Test Report: Docker_Linux_crio_arm64 21643

cc42fd2f8cec8fa883ff6f7397a2f6141c487062:2025-10-02:41725

Failed tests (47/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.3
35 TestAddons/parallel/Registry 14.57
36 TestAddons/parallel/RegistryCreds 0.54
37 TestAddons/parallel/Ingress 144.85
38 TestAddons/parallel/InspektorGadget 6.26
39 TestAddons/parallel/MetricsServer 6.41
41 TestAddons/parallel/CSI 55.08
42 TestAddons/parallel/Headlamp 3.15
43 TestAddons/parallel/CloudSpanner 6.26
44 TestAddons/parallel/LocalPath 8.42
45 TestAddons/parallel/NvidiaDevicePlugin 6.26
46 TestAddons/parallel/Yakd 5.26
52 TestForceSystemdFlag 513.62
53 TestForceSystemdEnv 512.96
98 TestFunctional/parallel/ServiceCmdConnect 603.54
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.06
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.3
129 TestFunctional/parallel/ServiceCmd/DeployApp 600.84
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
148 TestFunctional/parallel/ServiceCmd/Format 0.39
149 TestFunctional/parallel/ServiceCmd/URL 0.39
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 508.4
175 TestMultiControlPlane/serial/DeleteSecondaryNode 2.37
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.22
177 TestMultiControlPlane/serial/StopCluster 2.71
178 TestMultiControlPlane/serial/RestartCluster 477.29
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 5.35
180 TestMultiControlPlane/serial/AddSecondaryNode 4.79
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 5.31
191 TestJSONOutput/pause/Command 2.51
197 TestJSONOutput/unpause/Command 1.74
250 TestScheduledStopUnix 33.74
281 TestPause/serial/Pause 7.14
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.95
303 TestStartStop/group/old-k8s-version/serial/Pause 6.12
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.62
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.36
321 TestStartStop/group/no-preload/serial/Pause 7.31
327 TestStartStop/group/embed-certs/serial/Pause 7.63
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.54
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.39
343 TestStartStop/group/newest-cni/serial/Pause 7.69
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.29
TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 addons disable volcano --alsologtostderr -v=1: exit status 11 (300.88976ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 06:44:28.224872  300967 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:44:28.226366  300967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:44:28.226451  300967 out.go:374] Setting ErrFile to fd 2...
	I1002 06:44:28.226474  300967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:44:28.226787  300967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:44:28.227195  300967 mustload.go:65] Loading cluster: addons-067378
	I1002 06:44:28.227636  300967 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:44:28.227676  300967 addons.go:606] checking whether the cluster is paused
	I1002 06:44:28.227822  300967 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:44:28.227862  300967 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:44:28.228354  300967 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:44:28.245916  300967 ssh_runner.go:195] Run: systemctl --version
	I1002 06:44:28.245981  300967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:44:28.264701  300967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:44:28.361759  300967 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:44:28.361864  300967 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:44:28.393185  300967 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:44:28.393209  300967 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:44:28.393215  300967 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:44:28.393219  300967 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:44:28.393222  300967 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:44:28.393226  300967 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:44:28.393229  300967 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:44:28.393232  300967 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:44:28.393236  300967 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:44:28.393243  300967 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:44:28.393247  300967 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:44:28.393250  300967 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:44:28.393258  300967 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:44:28.393261  300967 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:44:28.393264  300967 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:44:28.393274  300967 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:44:28.393281  300967 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:44:28.393288  300967 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:44:28.393297  300967 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:44:28.393300  300967 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:44:28.393304  300967 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:44:28.393308  300967 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:44:28.393311  300967 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:44:28.393314  300967 cri.go:89] found id: ""
	I1002 06:44:28.393367  300967 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:44:28.409313  300967 out.go:203] 
	W1002 06:44:28.412412  300967 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:44:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:44:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:44:28.412441  300967 out.go:285] * 
	* 
	W1002 06:44:28.423253  300967 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:44:28.426479  300967 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-067378 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.30s)

TestAddons/parallel/Registry (14.57s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.911792ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-w2szx" [b634a53f-990a-4739-a9b3-2cf22c99e147] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003795506s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-zrq82" [76bc889e-53d2-4b4b-89a1-527536fef260] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00412765s
addons_test.go:392: (dbg) Run:  kubectl --context addons-067378 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-067378 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-067378 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.042912874s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 ip
2025/10/02 06:44:52 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 addons disable registry --alsologtostderr -v=1: exit status 11 (272.121277ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 06:44:52.996600  301900 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:44:52.997407  301900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:44:52.997422  301900 out.go:374] Setting ErrFile to fd 2...
	I1002 06:44:52.997428  301900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:44:52.997728  301900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:44:52.998012  301900 mustload.go:65] Loading cluster: addons-067378
	I1002 06:44:52.998364  301900 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:44:52.998374  301900 addons.go:606] checking whether the cluster is paused
	I1002 06:44:52.998542  301900 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:44:52.998567  301900 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:44:52.999048  301900 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:44:53.036757  301900 ssh_runner.go:195] Run: systemctl --version
	I1002 06:44:53.036816  301900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:44:53.061511  301900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:44:53.157767  301900 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:44:53.157919  301900 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:44:53.191714  301900 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:44:53.191741  301900 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:44:53.191746  301900 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:44:53.191750  301900 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:44:53.191754  301900 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:44:53.191758  301900 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:44:53.191761  301900 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:44:53.191764  301900 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:44:53.191767  301900 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:44:53.191774  301900 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:44:53.191777  301900 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:44:53.191781  301900 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:44:53.191784  301900 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:44:53.191787  301900 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:44:53.191790  301900 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:44:53.191803  301900 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:44:53.191807  301900 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:44:53.191812  301900 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:44:53.191815  301900 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:44:53.191818  301900 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:44:53.191823  301900 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:44:53.191826  301900 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:44:53.191829  301900 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:44:53.191833  301900 cri.go:89] found id: ""
	I1002 06:44:53.191887  301900 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:44:53.207499  301900 out.go:203] 
	W1002 06:44:53.210468  301900 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:44:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:44:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:44:53.210496  301900 out.go:285] * 
	* 
	W1002 06:44:53.215606  301900 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:44:53.218594  301900 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-067378 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.57s)

TestAddons/parallel/RegistryCreds (0.54s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.459119ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-067378
addons_test.go:332: (dbg) Run:  kubectl --context addons-067378 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (242.888137ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 06:45:37.225175  303093 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:45:37.226148  303093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:45:37.226195  303093 out.go:374] Setting ErrFile to fd 2...
	I1002 06:45:37.226215  303093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:45:37.226526  303093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:45:37.226973  303093 mustload.go:65] Loading cluster: addons-067378
	I1002 06:45:37.227440  303093 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:45:37.227488  303093 addons.go:606] checking whether the cluster is paused
	I1002 06:45:37.227625  303093 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:45:37.227669  303093 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:45:37.228306  303093 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:45:37.246376  303093 ssh_runner.go:195] Run: systemctl --version
	I1002 06:45:37.246436  303093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:45:37.263838  303093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:45:37.357619  303093 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:45:37.357702  303093 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:45:37.386785  303093 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:45:37.386806  303093 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:45:37.386811  303093 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:45:37.386815  303093 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:45:37.386819  303093 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:45:37.386823  303093 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:45:37.386827  303093 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:45:37.386831  303093 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:45:37.386834  303093 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:45:37.386840  303093 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:45:37.386843  303093 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:45:37.386846  303093 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:45:37.386850  303093 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:45:37.386853  303093 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:45:37.386856  303093 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:45:37.386874  303093 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:45:37.386882  303093 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:45:37.386888  303093 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:45:37.386891  303093 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:45:37.386895  303093 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:45:37.386899  303093 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:45:37.386907  303093 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:45:37.386911  303093 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:45:37.386914  303093 cri.go:89] found id: ""
	I1002 06:45:37.386964  303093 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:45:37.402447  303093 out.go:203] 
	W1002 06:45:37.405260  303093 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:45:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:45:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:45:37.405289  303093 out.go:285] * 
	* 
	W1002 06:45:37.410316  303093 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:45:37.413140  303093 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-067378 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.54s)

TestAddons/parallel/Ingress (144.85s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-067378 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-067378 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-067378 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [7ce8563f-18ee-417b-840c-ec2f4596b7df] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [7ce8563f-18ee-417b-840c-ec2f4596b7df] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003711896s
I1002 06:45:15.642987  294357 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.934284922s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-067378 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-067378
helpers_test.go:243: (dbg) docker inspect addons-067378:

-- stdout --
	[
	    {
	        "Id": "be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743",
	        "Created": "2025-10-02T06:42:00.285266979Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:42:00.437236977Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743/hostname",
	        "HostsPath": "/var/lib/docker/containers/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743/hosts",
	        "LogPath": "/var/lib/docker/containers/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743-json.log",
	        "Name": "/addons-067378",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-067378:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-067378",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743",
	                "LowerDir": "/var/lib/docker/overlay2/27be614f558d0a8c3c52c831d477e8c5c9e368d506c2a9434a912568103adf6f-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/27be614f558d0a8c3c52c831d477e8c5c9e368d506c2a9434a912568103adf6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/27be614f558d0a8c3c52c831d477e8c5c9e368d506c2a9434a912568103adf6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/27be614f558d0a8c3c52c831d477e8c5c9e368d506c2a9434a912568103adf6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-067378",
	                "Source": "/var/lib/docker/volumes/addons-067378/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-067378",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-067378",
	                "name.minikube.sigs.k8s.io": "addons-067378",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e183065c5c1950ede2433c49d3f8899bad7fc9dd4dcfd4ca487ce9abfcd56f29",
	            "SandboxKey": "/var/run/docker/netns/e183065c5c19",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-067378": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:20:fc:31:81:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de8269b06b79a3e18d05347fbb9c73f4a624138eb10bd2509355bfcb5f7a406e",
	                    "EndpointID": "fb9d1ced2a7c935d95b479062b33b33a16e640c84101c4e40ed28a1f530269cf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-067378",
	                        "be6899c5910e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-067378 -n addons-067378
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-067378 logs -n 25: (1.433854648s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-396070                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-396070 │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │ 02 Oct 25 06:41 UTC │
	│ start   │ --download-only -p binary-mirror-242470 --alsologtostderr --binary-mirror http://127.0.0.1:34303 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-242470   │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │                     │
	│ delete  │ -p binary-mirror-242470                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-242470   │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │ 02 Oct 25 06:41 UTC │
	│ addons  │ disable dashboard -p addons-067378                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │                     │
	│ addons  │ enable dashboard -p addons-067378                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │                     │
	│ start   │ -p addons-067378 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │ 02 Oct 25 06:44 UTC │
	│ addons  │ addons-067378 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:44 UTC │                     │
	│ addons  │ addons-067378 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:44 UTC │                     │
	│ addons  │ enable headlamp -p addons-067378 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:44 UTC │                     │
	│ addons  │ addons-067378 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:44 UTC │                     │
	│ ip      │ addons-067378 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:44 UTC │ 02 Oct 25 06:44 UTC │
	│ addons  │ addons-067378 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:44 UTC │                     │
	│ addons  │ addons-067378 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:44 UTC │                     │
	│ addons  │ addons-067378 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:45 UTC │                     │
	│ ssh     │ addons-067378 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:45 UTC │                     │
	│ addons  │ addons-067378 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:45 UTC │                     │
	│ addons  │ addons-067378 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:45 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-067378                                                                                                                                                                                                                                                                                                                                                                                           │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:45 UTC │ 02 Oct 25 06:45 UTC │
	│ addons  │ addons-067378 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:45 UTC │                     │
	│ addons  │ addons-067378 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:45 UTC │                     │
	│ addons  │ addons-067378 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:45 UTC │                     │
	│ ssh     │ addons-067378 ssh cat /opt/local-path-provisioner/pvc-5c575a42-27bf-44ea-b0d8-7a407f2814bc_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:45 UTC │ 02 Oct 25 06:45 UTC │
	│ addons  │ addons-067378 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:45 UTC │                     │
	│ addons  │ addons-067378 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:46 UTC │                     │
	│ ip      │ addons-067378 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:47 UTC │ 02 Oct 25 06:47 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:41:33
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:41:33.571837  295123 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:41:33.571950  295123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:41:33.571961  295123 out.go:374] Setting ErrFile to fd 2...
	I1002 06:41:33.571966  295123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:41:33.572226  295123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:41:33.572671  295123 out.go:368] Setting JSON to false
	I1002 06:41:33.573513  295123 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5045,"bootTime":1759382249,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 06:41:33.573582  295123 start.go:140] virtualization:  
	I1002 06:41:33.576912  295123 out.go:179] * [addons-067378] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 06:41:33.580601  295123 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:41:33.580662  295123 notify.go:220] Checking for updates...
	I1002 06:41:33.586457  295123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:41:33.589395  295123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 06:41:33.592355  295123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 06:41:33.595220  295123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 06:41:33.598064  295123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:41:33.601212  295123 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:41:33.628509  295123 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 06:41:33.628639  295123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:41:33.683589  295123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 06:41:33.674283519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:41:33.683695  295123 docker.go:318] overlay module found
	I1002 06:41:33.688759  295123 out.go:179] * Using the docker driver based on user configuration
	I1002 06:41:33.691744  295123 start.go:304] selected driver: docker
	I1002 06:41:33.691767  295123 start.go:924] validating driver "docker" against <nil>
	I1002 06:41:33.691781  295123 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:41:33.692497  295123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:41:33.747437  295123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 06:41:33.737895417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:41:33.747596  295123 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:41:33.747835  295123 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:41:33.750873  295123 out.go:179] * Using Docker driver with root privileges
	I1002 06:41:33.753663  295123 cni.go:84] Creating CNI manager for ""
	I1002 06:41:33.753747  295123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:41:33.753762  295123 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:41:33.753845  295123 start.go:348] cluster config:
	{Name:addons-067378 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-067378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1002 06:41:33.756921  295123 out.go:179] * Starting "addons-067378" primary control-plane node in "addons-067378" cluster
	I1002 06:41:33.759725  295123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:41:33.762728  295123 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:41:33.765574  295123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:41:33.765638  295123 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 06:41:33.765654  295123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:41:33.765668  295123 cache.go:58] Caching tarball of preloaded images
	I1002 06:41:33.765762  295123 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 06:41:33.765773  295123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:41:33.766114  295123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/config.json ...
	I1002 06:41:33.766148  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/config.json: {Name:mka25b4481cb88cb84ea2a131c49da153455d30a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
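	The profile configuration saved here is plain JSON, so the persisted settings for this run can be read back directly if anything looks off later (path taken from the log line above):
	# Read back the profile config minikube just wrote
	cat /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/config.json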
	I1002 06:41:33.781708  295123 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:41:33.781838  295123 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 06:41:33.781857  295123 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 06:41:33.781862  295123 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 06:41:33.781869  295123 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 06:41:33.781875  295123 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 06:41:52.074232  295123 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 06:41:52.074286  295123 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:41:52.074317  295123 start.go:360] acquireMachinesLock for addons-067378: {Name:mk901da383b3ee543c55d3fb99cc36a665e7de29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:41:52.074441  295123 start.go:364] duration metric: took 98.355µs to acquireMachinesLock for "addons-067378"
	I1002 06:41:52.074473  295123 start.go:93] Provisioning new machine with config: &{Name:addons-067378 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-067378 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:41:52.074589  295123 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:41:52.078151  295123 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 06:41:52.078398  295123 start.go:159] libmachine.API.Create for "addons-067378" (driver="docker")
	I1002 06:41:52.078466  295123 client.go:168] LocalClient.Create starting
	I1002 06:41:52.078615  295123 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem
	I1002 06:41:52.870837  295123 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem
	I1002 06:41:53.195908  295123 cli_runner.go:164] Run: docker network inspect addons-067378 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:41:53.212894  295123 cli_runner.go:211] docker network inspect addons-067378 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:41:53.212975  295123 network_create.go:284] running [docker network inspect addons-067378] to gather additional debugging logs...
	I1002 06:41:53.213000  295123 cli_runner.go:164] Run: docker network inspect addons-067378
	W1002 06:41:53.229722  295123 cli_runner.go:211] docker network inspect addons-067378 returned with exit code 1
	I1002 06:41:53.229749  295123 network_create.go:287] error running [docker network inspect addons-067378]: docker network inspect addons-067378: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-067378 not found
	I1002 06:41:53.229774  295123 network_create.go:289] output of [docker network inspect addons-067378]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-067378 not found
	
	** /stderr **
	I1002 06:41:53.229899  295123 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:41:53.245675  295123 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001940c70}
	I1002 06:41:53.245712  295123 network_create.go:124] attempt to create docker network addons-067378 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:41:53.245774  295123 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-067378 addons-067378
	I1002 06:41:53.300325  295123 network_create.go:108] docker network addons-067378 192.168.49.0/24 created
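	The per-profile bridge network reported above can be verified (or recreated by hand) with the docker CLI; the subnet, gateway and labels below are lifted from the logged "docker network create" invocation, with the less common -o flags omitted:
	# Verify the network minikube just created
	docker network inspect addons-067378 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# Roughly equivalent manual creation (options as logged above)
	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-067378 \
	  addons-067378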
	I1002 06:41:53.300358  295123 kic.go:121] calculated static IP "192.168.49.2" for the "addons-067378" container
	I1002 06:41:53.300453  295123 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:41:53.316027  295123 cli_runner.go:164] Run: docker volume create addons-067378 --label name.minikube.sigs.k8s.io=addons-067378 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:41:53.333327  295123 oci.go:103] Successfully created a docker volume addons-067378
	I1002 06:41:53.333428  295123 cli_runner.go:164] Run: docker run --rm --name addons-067378-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-067378 --entrypoint /usr/bin/test -v addons-067378:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:41:55.605411  295123 cli_runner.go:217] Completed: docker run --rm --name addons-067378-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-067378 --entrypoint /usr/bin/test -v addons-067378:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.271942847s)
	I1002 06:41:55.605440  295123 oci.go:107] Successfully prepared a docker volume addons-067378
	I1002 06:41:55.605478  295123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:41:55.605500  295123 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:41:55.605563  295123 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-067378:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:42:00.056251  295123 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-067378:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.45062096s)
	I1002 06:42:00.056292  295123 kic.go:203] duration metric: took 4.45078723s to extract preloaded images to volume ...
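	The 4.45s extraction above is an ordinary tar run inside a throwaway kicbase container; reproducing it by hand looks like the logged command (sketch only, with the image digest from the log dropped for brevity):
	# Extract the cri-o preload tarball into the profile's docker volume
	docker run --rm --entrypoint /usr/bin/tar \
	  -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro \
	  -v addons-067378:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643 -I lz4 -xf /preloaded.tar -C /extractDir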
	W1002 06:42:00.056471  295123 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 06:42:00.056594  295123 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:42:00.259211  295123 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-067378 --name addons-067378 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-067378 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-067378 --network addons-067378 --ip 192.168.49.2 --volume addons-067378:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:42:00.654089  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Running}}
	I1002 06:42:00.671961  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:00.699021  295123 cli_runner.go:164] Run: docker exec addons-067378 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:42:00.752980  295123 oci.go:144] the created container "addons-067378" has a running status.
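	The same docker inspect formats used throughout this log are handy for confirming the node container's state and the static IP calculated above:
	# Confirm the node container is running and picked up 192.168.49.2
	docker container inspect addons-067378 --format '{{.State.Status}}'
	docker container inspect addons-067378 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'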
	I1002 06:42:00.753007  295123 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa...
	I1002 06:42:01.375233  295123 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:42:01.396387  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:01.413611  295123 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:42:01.413638  295123 kic_runner.go:114] Args: [docker exec --privileged addons-067378 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:42:01.454893  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:01.474916  295123 machine.go:93] provisionDockerMachine start ...
	I1002 06:42:01.475041  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:01.493222  295123 main.go:141] libmachine: Using SSH client type: native
	I1002 06:42:01.493561  295123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1002 06:42:01.493579  295123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:42:01.494268  295123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 06:42:04.627034  295123 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-067378
	
	I1002 06:42:04.627060  295123 ubuntu.go:182] provisioning hostname "addons-067378"
	I1002 06:42:04.627170  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:04.645529  295123 main.go:141] libmachine: Using SSH client type: native
	I1002 06:42:04.645835  295123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1002 06:42:04.645851  295123 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-067378 && echo "addons-067378" | sudo tee /etc/hostname
	I1002 06:42:04.784853  295123 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-067378
	
	I1002 06:42:04.784935  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:04.803551  295123 main.go:141] libmachine: Using SSH client type: native
	I1002 06:42:04.803864  295123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1002 06:42:04.803888  295123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-067378' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-067378/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-067378' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:42:04.935478  295123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:42:04.935507  295123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 06:42:04.935529  295123 ubuntu.go:190] setting up certificates
	I1002 06:42:04.935539  295123 provision.go:84] configureAuth start
	I1002 06:42:04.935622  295123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-067378
	I1002 06:42:04.953563  295123 provision.go:143] copyHostCerts
	I1002 06:42:04.953651  295123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 06:42:04.953782  295123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 06:42:04.953852  295123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 06:42:04.953906  295123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.addons-067378 san=[127.0.0.1 192.168.49.2 addons-067378 localhost minikube]
	I1002 06:42:05.273181  295123 provision.go:177] copyRemoteCerts
	I1002 06:42:05.273250  295123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:42:05.273290  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:05.290302  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:05.387062  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 06:42:05.405258  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 06:42:05.424206  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 06:42:05.441651  295123 provision.go:87] duration metric: took 506.082719ms to configureAuth
	I1002 06:42:05.441677  295123 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:42:05.441863  295123 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:42:05.441979  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:05.459041  295123 main.go:141] libmachine: Using SSH client type: native
	I1002 06:42:05.459386  295123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1002 06:42:05.459407  295123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:42:05.694399  295123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:42:05.694422  295123 machine.go:96] duration metric: took 4.219482672s to provisionDockerMachine
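	One way to confirm the container-runtime option written just above actually landed on the node and that cri-o survived the restart (a sketch using the test binary from this run):
	out/minikube-linux-arm64 -p addons-067378 ssh -- cat /etc/sysconfig/crio.minikube
	out/minikube-linux-arm64 -p addons-067378 ssh -- sudo systemctl is-active crio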
	I1002 06:42:05.694431  295123 client.go:171] duration metric: took 13.615957841s to LocalClient.Create
	I1002 06:42:05.694460  295123 start.go:167] duration metric: took 13.616049534s to libmachine.API.Create "addons-067378"
	I1002 06:42:05.694467  295123 start.go:293] postStartSetup for "addons-067378" (driver="docker")
	I1002 06:42:05.694476  295123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:42:05.694544  295123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:42:05.694584  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:05.712617  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:05.807179  295123 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:42:05.810498  295123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:42:05.810527  295123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:42:05.810539  295123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 06:42:05.810604  295123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 06:42:05.810634  295123 start.go:296] duration metric: took 116.161636ms for postStartSetup
	I1002 06:42:05.810945  295123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-067378
	I1002 06:42:05.827048  295123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/config.json ...
	I1002 06:42:05.827384  295123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:42:05.827439  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:05.850102  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:05.939470  295123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:42:05.943794  295123 start.go:128] duration metric: took 13.869189388s to createHost
	I1002 06:42:05.943819  295123 start.go:83] releasing machines lock for "addons-067378", held for 13.86936361s
	I1002 06:42:05.943895  295123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-067378
	I1002 06:42:05.959875  295123 ssh_runner.go:195] Run: cat /version.json
	I1002 06:42:05.959936  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:05.959953  295123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:42:05.960004  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:05.977737  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:05.977982  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:06.168759  295123 ssh_runner.go:195] Run: systemctl --version
	I1002 06:42:06.175160  295123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:42:06.211880  295123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:42:06.216259  295123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:42:06.216329  295123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:42:06.244853  295123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
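	If CNI behaviour needs debugging later, the effect of this step (the podman/crio bridge configs renamed to *.mk_disabled) is easy to confirm from the host:
	# List what remains active in /etc/cni/net.d after the bridge configs were side-lined
	out/minikube-linux-arm64 -p addons-067378 ssh -- sudo ls -la /etc/cni/net.d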
	I1002 06:42:06.244880  295123 start.go:495] detecting cgroup driver to use...
	I1002 06:42:06.244912  295123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 06:42:06.244969  295123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:42:06.261788  295123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:42:06.274723  295123 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:42:06.274794  295123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:42:06.292166  295123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:42:06.311239  295123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:42:06.428042  295123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:42:06.555921  295123 docker.go:234] disabling docker service ...
	I1002 06:42:06.556070  295123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:42:06.579610  295123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:42:06.593270  295123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:42:06.713103  295123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:42:06.834792  295123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
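	A quick check that docker and cri-docker really are out of the way on the node; the expected answers are disabled or masked:
	# Masked units print "masked" and return non-zero, hence the trailing || true
	out/minikube-linux-arm64 -p addons-067378 ssh -- systemctl is-enabled docker.service cri-docker.service || true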
	I1002 06:42:06.846934  295123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:42:06.860625  295123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:42:06.860694  295123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:42:06.869516  295123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 06:42:06.869580  295123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:42:06.878262  295123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:42:06.886964  295123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:42:06.895974  295123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:42:06.904281  295123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:42:06.913053  295123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:42:06.926162  295123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:42:06.934872  295123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:42:06.942524  295123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:42:06.949891  295123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:42:07.055223  295123 ssh_runner.go:195] Run: sudo systemctl restart crio
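	All of the sed edits above target one drop-in file, so grepping it back out after the restart is the quickest way to see the resulting cri-o configuration:
	# Spot-check the cri-o drop-in produced by the sed edits above
	out/minikube-linux-arm64 -p addons-067378 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf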
	I1002 06:42:07.182900  295123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:42:07.183031  295123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:42:07.186882  295123 start.go:563] Will wait 60s for crictl version
	I1002 06:42:07.186993  295123 ssh_runner.go:195] Run: which crictl
	I1002 06:42:07.190613  295123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:42:07.214674  295123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:42:07.214815  295123 ssh_runner.go:195] Run: crio --version
	I1002 06:42:07.245355  295123 ssh_runner.go:195] Run: crio --version
	I1002 06:42:07.278557  295123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:42:07.281426  295123 cli_runner.go:164] Run: docker network inspect addons-067378 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:42:07.297823  295123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:42:07.301844  295123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:42:07.311575  295123 kubeadm.go:883] updating cluster {Name:addons-067378 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-067378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:42:07.311688  295123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:42:07.311743  295123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:42:07.347297  295123 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:42:07.347322  295123 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:42:07.347379  295123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:42:07.371554  295123 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:42:07.371577  295123 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:42:07.371585  295123 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:42:07.371720  295123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-067378 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-067378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
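	This rendered kubelet unit is staged onto the node a little further down (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf at 06:42:07.443); once it is in place, systemctl can show the merged result:
	# Show the kubelet unit together with minikube's drop-in (run after the scp below)
	out/minikube-linux-arm64 -p addons-067378 ssh -- sudo systemctl cat kubelet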
	I1002 06:42:07.371808  295123 ssh_runner.go:195] Run: crio config
	I1002 06:42:07.427134  295123 cni.go:84] Creating CNI manager for ""
	I1002 06:42:07.427164  295123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:42:07.427181  295123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:42:07.427206  295123 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-067378 NodeName:addons-067378 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:42:07.427365  295123 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-067378"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:42:07.427443  295123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:42:07.435466  295123 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:42:07.435567  295123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:42:07.443580  295123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 06:42:07.456907  295123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:42:07.469855  295123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
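	With the kubeadm config staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked on the node before init runs; a sketch, assuming the bundled kubeadm supports the "config validate" subcommand (present in recent releases):
	out/minikube-linux-arm64 -p addons-067378 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new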
	I1002 06:42:07.482966  295123 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:42:07.486573  295123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
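	Both /etc/hosts rewrites in this log (host.minikube.internal at 06:42:07.30 above and control-plane.minikube.internal here) can be confirmed in one go:
	# Confirm the minikube-internal host entries ended up in the node's /etc/hosts
	out/minikube-linux-arm64 -p addons-067378 ssh -- grep minikube.internal /etc/hosts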
	I1002 06:42:07.496542  295123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:42:07.603205  295123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:42:07.619768  295123 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378 for IP: 192.168.49.2
	I1002 06:42:07.619833  295123 certs.go:195] generating shared ca certs ...
	I1002 06:42:07.619877  295123 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:07.620048  295123 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 06:42:08.245253  295123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt ...
	I1002 06:42:08.245289  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt: {Name:mk8f52b922b701ca88ac15b4067ef5563f1025f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:08.246153  295123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key ...
	I1002 06:42:08.246172  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key: {Name:mk5b501a84195826066992c4a112a0a97eb1d5ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:08.246813  295123 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 06:42:08.451500  295123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt ...
	I1002 06:42:08.451531  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt: {Name:mkeba7e1f2385589bffb45ecff4ebd8abdca6a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:08.451705  295123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key ...
	I1002 06:42:08.451721  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key: {Name:mkb63062080aec405421dd75f400c3122397125a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:08.451811  295123 certs.go:257] generating profile certs ...
	I1002 06:42:08.451876  295123 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.key
	I1002 06:42:08.451893  295123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt with IP's: []
	I1002 06:42:08.866526  295123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt ...
	I1002 06:42:08.866557  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: {Name:mk6fd84a6d92953c0d2c0107b9c19fa02585ab28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:08.866748  295123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.key ...
	I1002 06:42:08.866763  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.key: {Name:mk5b1ab617eb5935fcb095e6c579d7151fcfa5ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:08.866844  295123 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.key.0a4f8341
	I1002 06:42:08.866863  295123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.crt.0a4f8341 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 06:42:10.194587  295123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.crt.0a4f8341 ...
	I1002 06:42:10.194621  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.crt.0a4f8341: {Name:mkd13c13b1c48ac3fa0b870434d6c8910e883aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:10.195472  295123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.key.0a4f8341 ...
	I1002 06:42:10.195493  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.key.0a4f8341: {Name:mk5d43f7a4741a8639b56f306d2bf3c5e007e199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:10.195586  295123 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.crt.0a4f8341 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.crt
	I1002 06:42:10.195672  295123 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.key.0a4f8341 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.key
	I1002 06:42:10.195730  295123 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.key
	I1002 06:42:10.195752  295123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.crt with IP's: []
	I1002 06:42:10.526719  295123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.crt ...
	I1002 06:42:10.526752  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.crt: {Name:mk1ef9672a854b186a4c97bb8db7ff752f395991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:10.526928  295123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.key ...
	I1002 06:42:10.526942  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.key: {Name:mk6c6f9e91f3733ab2c68da2aa81326c528adf88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:10.527147  295123 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:42:10.527191  295123 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 06:42:10.527219  295123 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:42:10.527245  295123 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
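
The profile certificates generated above are ordinary x509 server certificates whose IP SANs cover the service VIP (10.96.0.1), loopback, 10.0.0.1, and the node IP (192.168.49.2). A self-contained Go sketch of issuing a certificate with those SANs, using an ECDSA key and a self-signed template purely for brevity (in this run the real certs are signed by minikubeCA, not self-signed):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// IP SANs taken from the apiserver cert in the log above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"),
		net.ParseIP("127.0.0.1"),
		net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.49.2"),
	}
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  ips,
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here for brevity; a real profile cert uses the cluster CA as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
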
	I1002 06:42:10.527823  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:42:10.546234  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:42:10.565267  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:42:10.582319  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:42:10.599661  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 06:42:10.617354  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 06:42:10.634895  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:42:10.656941  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 06:42:10.677798  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:42:10.696365  295123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:42:10.709728  295123 ssh_runner.go:195] Run: openssl version
	I1002 06:42:10.716265  295123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:42:10.724656  295123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:42:10.728252  295123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:42:10.728359  295123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:42:10.769473  295123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:42:10.777650  295123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:42:10.781245  295123 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:42:10.781321  295123 kubeadm.go:400] StartCluster: {Name:addons-067378 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-067378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:42:10.781436  295123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:42:10.781516  295123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:42:10.809513  295123 cri.go:89] found id: ""
	I1002 06:42:10.809681  295123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:42:10.818224  295123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:42:10.825976  295123 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:42:10.826111  295123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:42:10.833761  295123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:42:10.833783  295123 kubeadm.go:157] found existing configuration files:
	
	I1002 06:42:10.833849  295123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:42:10.841656  295123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:42:10.841746  295123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:42:10.849035  295123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:42:10.857244  295123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:42:10.857345  295123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:42:10.864793  295123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:42:10.872819  295123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:42:10.872908  295123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:42:10.880591  295123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:42:10.888649  295123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:42:10.888738  295123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:42:10.896294  295123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:42:10.937444  295123 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:42:10.937731  295123 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:42:10.960699  295123 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:42:10.960835  295123 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 06:42:10.960907  295123 kubeadm.go:318] OS: Linux
	I1002 06:42:10.960989  295123 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:42:10.961078  295123 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 06:42:10.961168  295123 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:42:10.961256  295123 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:42:10.961342  295123 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:42:10.961419  295123 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:42:10.961487  295123 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:42:10.961569  295123 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:42:10.961646  295123 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 06:42:11.031703  295123 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:42:11.031835  295123 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:42:11.031939  295123 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:42:11.039877  295123 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:42:11.045630  295123 out.go:252]   - Generating certificates and keys ...
	I1002 06:42:11.045809  295123 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:42:11.045932  295123 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:42:11.505843  295123 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:42:12.400307  295123 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:42:13.735341  295123 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:42:15.178717  295123 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:42:15.453379  295123 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:42:15.453518  295123 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-067378 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:42:16.243114  295123 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:42:16.243271  295123 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-067378 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:42:16.521737  295123 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:42:16.813622  295123 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:42:17.175400  295123 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:42:17.175648  295123 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:42:17.893731  295123 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:42:18.337709  295123 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:42:18.450908  295123 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:42:19.075090  295123 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:42:19.318399  295123 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:42:19.319075  295123 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:42:19.321873  295123 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:42:19.325194  295123 out.go:252]   - Booting up control plane ...
	I1002 06:42:19.325316  295123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:42:19.325397  295123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:42:19.325466  295123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:42:19.340234  295123 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:42:19.340349  295123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:42:19.347310  295123 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:42:19.347700  295123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:42:19.347750  295123 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:42:19.487005  295123 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:42:19.487233  295123 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:42:20.000724  295123 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 516.677854ms
	I1002 06:42:20.005451  295123 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:42:20.007171  295123 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:42:20.007647  295123 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:42:20.007742  295123 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:42:22.504164  295123 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.493980592s
	I1002 06:42:24.111949  295123 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.104742169s
	I1002 06:42:26.010915  295123 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002965151s
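
The three control-plane checks above are plain HTTPS polls against well-known local endpoints (kube-apiserver :8443/livez, kube-controller-manager :10257/healthz, kube-scheduler :10259/livez). A minimal sketch of one such probe, assuming certificate verification can be skipped because these components serve self-signed certificates on localhost; a real caller would also bound the loop with a timeout:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Skip TLS verification: the local control-plane endpoints use self-signed certs.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	// Endpoint taken from the log above: kube-scheduler's livez probe.
	const url = "https://127.0.0.1:10259/livez"
	start := time.Now()
	for {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Printf("kube-scheduler is healthy after %v\n", time.Since(start))
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
}
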
	I1002 06:42:26.031823  295123 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 06:42:26.049405  295123 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 06:42:26.067365  295123 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 06:42:26.067601  295123 kubeadm.go:318] [mark-control-plane] Marking the node addons-067378 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 06:42:26.081283  295123 kubeadm.go:318] [bootstrap-token] Using token: 6muyxj.gpbfsrhp5ca1bx8q
	I1002 06:42:26.084410  295123 out.go:252]   - Configuring RBAC rules ...
	I1002 06:42:26.084559  295123 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 06:42:26.089435  295123 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 06:42:26.101950  295123 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 06:42:26.106702  295123 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 06:42:26.112111  295123 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 06:42:26.116331  295123 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 06:42:26.423848  295123 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 06:42:26.857816  295123 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 06:42:27.418105  295123 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 06:42:27.419527  295123 kubeadm.go:318] 
	I1002 06:42:27.419601  295123 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 06:42:27.419608  295123 kubeadm.go:318] 
	I1002 06:42:27.419689  295123 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 06:42:27.419693  295123 kubeadm.go:318] 
	I1002 06:42:27.419719  295123 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 06:42:27.419781  295123 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 06:42:27.419838  295123 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 06:42:27.419844  295123 kubeadm.go:318] 
	I1002 06:42:27.419901  295123 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 06:42:27.419905  295123 kubeadm.go:318] 
	I1002 06:42:27.419955  295123 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 06:42:27.419960  295123 kubeadm.go:318] 
	I1002 06:42:27.420015  295123 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 06:42:27.420093  295123 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 06:42:27.420165  295123 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 06:42:27.420169  295123 kubeadm.go:318] 
	I1002 06:42:27.420258  295123 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 06:42:27.420352  295123 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 06:42:27.420358  295123 kubeadm.go:318] 
	I1002 06:42:27.420446  295123 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 6muyxj.gpbfsrhp5ca1bx8q \
	I1002 06:42:27.420554  295123 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf \
	I1002 06:42:27.420575  295123 kubeadm.go:318] 	--control-plane 
	I1002 06:42:27.420579  295123 kubeadm.go:318] 
	I1002 06:42:27.420668  295123 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 06:42:27.420672  295123 kubeadm.go:318] 
	I1002 06:42:27.420758  295123 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 6muyxj.gpbfsrhp5ca1bx8q \
	I1002 06:42:27.420865  295123 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf 
	I1002 06:42:27.423162  295123 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 06:42:27.423394  295123 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 06:42:27.423514  295123 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
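
For reference, the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA's public key (its DER-encoded SubjectPublicKeyInfo). A small Go sketch that reproduces the value from the CA certificate; the path is the one used in this run and is otherwise illustrative:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Inside the node, this run keeps the cluster CA under /var/lib/minikube/certs.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is SHA-256 over the DER-encoded
	// SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
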
	I1002 06:42:27.423537  295123 cni.go:84] Creating CNI manager for ""
	I1002 06:42:27.423545  295123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:42:27.426680  295123 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 06:42:27.429557  295123 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 06:42:27.433589  295123 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 06:42:27.433623  295123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 06:42:27.447321  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 06:42:27.729441  295123 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 06:42:27.729636  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:27.729762  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-067378 minikube.k8s.io/updated_at=2025_10_02T06_42_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=addons-067378 minikube.k8s.io/primary=true
	I1002 06:42:27.914369  295123 ops.go:34] apiserver oom_adj: -16
	I1002 06:42:27.914555  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:28.415608  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:28.915214  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:29.414599  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:29.914791  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:30.414792  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:30.914956  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:31.415266  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:31.914588  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:32.042597  295123 kubeadm.go:1113] duration metric: took 4.313024113s to wait for elevateKubeSystemPrivileges
	I1002 06:42:32.042632  295123 kubeadm.go:402] duration metric: took 21.261338682s to StartCluster
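
The repeated "kubectl get sa default" invocations above are a simple poll: the command is re-run roughly every half second until the default service account exists, which is what the 4.3s duration metric measures. A minimal sketch of that wait pattern, not minikube's actual elevateKubeSystemPrivileges code; the interval and timeout here are assumptions:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// defaultSAExists shells out the same way the log does; the kubectl path and
// kubeconfig are copied from this run and are otherwise illustrative.
func defaultSAExists() bool {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
		"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
	return cmd.Run() == nil
}

// waitFor polls check at the given interval until it succeeds or the timeout expires.
func waitFor(check func() bool, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for condition")
}

func main() {
	start := time.Now()
	if err := waitFor(defaultSAExists, 500*time.Millisecond, 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("default service account ready after %v\n", time.Since(start))
}
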
	I1002 06:42:32.042650  295123 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:32.043508  295123 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 06:42:32.043928  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:32.044125  295123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 06:42:32.044156  295123 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:42:32.044394  295123 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:42:32.044432  295123 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 06:42:32.044512  295123 addons.go:69] Setting yakd=true in profile "addons-067378"
	I1002 06:42:32.044527  295123 addons.go:238] Setting addon yakd=true in "addons-067378"
	I1002 06:42:32.044548  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.045009  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.045356  295123 addons.go:69] Setting inspektor-gadget=true in profile "addons-067378"
	I1002 06:42:32.045381  295123 addons.go:238] Setting addon inspektor-gadget=true in "addons-067378"
	I1002 06:42:32.045416  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.045822  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.046178  295123 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-067378"
	I1002 06:42:32.046199  295123 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-067378"
	I1002 06:42:32.046227  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.046627  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.048388  295123 addons.go:69] Setting metrics-server=true in profile "addons-067378"
	I1002 06:42:32.051480  295123 addons.go:238] Setting addon metrics-server=true in "addons-067378"
	I1002 06:42:32.051537  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.052072  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.054883  295123 addons.go:69] Setting cloud-spanner=true in profile "addons-067378"
	I1002 06:42:32.054910  295123 addons.go:238] Setting addon cloud-spanner=true in "addons-067378"
	I1002 06:42:32.054946  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.055480  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.050996  295123 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-067378"
	I1002 06:42:32.062731  295123 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-067378"
	I1002 06:42:32.062779  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.051014  295123 addons.go:69] Setting registry=true in profile "addons-067378"
	I1002 06:42:32.063232  295123 addons.go:238] Setting addon registry=true in "addons-067378"
	I1002 06:42:32.063263  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.063658  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.066248  295123 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-067378"
	I1002 06:42:32.066410  295123 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-067378"
	I1002 06:42:32.066488  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.067253  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.051022  295123 addons.go:69] Setting registry-creds=true in profile "addons-067378"
	I1002 06:42:32.070286  295123 addons.go:238] Setting addon registry-creds=true in "addons-067378"
	I1002 06:42:32.070324  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.051028  295123 addons.go:69] Setting storage-provisioner=true in profile "addons-067378"
	I1002 06:42:32.073533  295123 addons.go:238] Setting addon storage-provisioner=true in "addons-067378"
	I1002 06:42:32.073571  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.074051  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.089726  295123 addons.go:69] Setting default-storageclass=true in profile "addons-067378"
	I1002 06:42:32.089756  295123 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-067378"
	I1002 06:42:32.090098  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.051132  295123 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-067378"
	I1002 06:42:32.098599  295123 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-067378"
	I1002 06:42:32.098954  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.111606  295123 addons.go:69] Setting gcp-auth=true in profile "addons-067378"
	I1002 06:42:32.111640  295123 mustload.go:65] Loading cluster: addons-067378
	I1002 06:42:32.111917  295123 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:42:32.112296  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.051138  295123 addons.go:69] Setting volcano=true in profile "addons-067378"
	I1002 06:42:32.115583  295123 addons.go:238] Setting addon volcano=true in "addons-067378"
	I1002 06:42:32.115642  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.116111  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.129337  295123 addons.go:69] Setting ingress=true in profile "addons-067378"
	I1002 06:42:32.129370  295123 addons.go:238] Setting addon ingress=true in "addons-067378"
	I1002 06:42:32.129420  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.129917  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.145066  295123 addons.go:69] Setting ingress-dns=true in profile "addons-067378"
	I1002 06:42:32.145106  295123 addons.go:238] Setting addon ingress-dns=true in "addons-067378"
	I1002 06:42:32.145148  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.145629  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.051161  295123 addons.go:69] Setting volumesnapshots=true in profile "addons-067378"
	I1002 06:42:32.152587  295123 addons.go:238] Setting addon volumesnapshots=true in "addons-067378"
	I1002 06:42:32.152628  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.051458  295123 out.go:179] * Verifying Kubernetes components...
	I1002 06:42:32.153211  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.217649  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.284453  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.305893  295123 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 06:42:32.318053  295123 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 06:42:32.322896  295123 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 06:42:32.328865  295123 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 06:42:32.328990  295123 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 06:42:32.329255  295123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:42:32.331305  295123 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 06:42:32.335827  295123 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 06:42:32.335899  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.341150  295123 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 06:42:32.341425  295123 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:42:32.341446  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 06:42:32.341507  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.333671  295123 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 06:42:32.353857  295123 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 06:42:32.353946  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.356721  295123 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1002 06:42:32.357788  295123 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 06:42:32.358155  295123 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 06:42:32.358211  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 06:42:32.358309  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.335366  295123 addons.go:238] Setting addon default-storageclass=true in "addons-067378"
	I1002 06:42:32.335478  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.365277  295123 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 06:42:32.365302  295123 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 06:42:32.365367  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.402048  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 06:42:32.407283  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 06:42:32.411657  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 06:42:32.417475  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 06:42:32.417803  295123 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:42:32.417821  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:42:32.417882  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.418642  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.419102  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.431390  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 06:42:32.436140  295123 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 06:42:32.438495  295123 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 06:42:32.440149  295123 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 06:42:32.440307  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 06:42:32.441481  295123 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-067378"
	I1002 06:42:32.441525  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.441958  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.456716  295123 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:42:32.456749  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 06:42:32.456862  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.481402  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 06:42:32.486692  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 06:42:32.492045  295123 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 06:42:32.492072  295123 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 06:42:32.492146  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.509482  295123 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:42:32.509507  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 06:42:32.509576  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.519510  295123 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 06:42:32.522653  295123 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 06:42:32.527556  295123 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:42:32.529070  295123 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:42:32.529087  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 06:42:32.529155  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.535411  295123 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 06:42:32.535461  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 06:42:32.535647  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.551039  295123 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:42:32.554439  295123 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:42:32.554463  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 06:42:32.554529  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.557798  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.586753  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.587361  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.588408  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 06:42:32.591561  295123 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 06:42:32.591582  295123 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 06:42:32.591642  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.600805  295123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 06:42:32.611315  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.612473  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.640279  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.645381  295123 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 06:42:32.648981  295123 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:42:32.649002  295123 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:42:32.649073  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.663322  295123 out.go:179]   - Using image docker.io/busybox:stable
	I1002 06:42:32.666799  295123 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:42:32.666822  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 06:42:32.666885  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.715406  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.727520  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.731866  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.760873  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.762069  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.771429  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	W1002 06:42:32.780828  295123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:42:32.781092  295123 retry.go:31] will retry after 252.885938ms: ssh: handshake failed: EOF
	I1002 06:42:32.795856  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.795997  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	W1002 06:42:32.800806  295123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:42:32.800835  295123 retry.go:31] will retry after 151.501199ms: ssh: handshake failed: EOF
	W1002 06:42:32.801220  295123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:42:32.801234  295123 retry.go:31] will retry after 337.139948ms: ssh: handshake failed: EOF
	I1002 06:42:32.802092  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
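
The handshake failures above ("ssh: handshake failed: EOF") are absorbed by a short, jittered retry rather than aborting addon setup. A minimal sketch of that retry-after-delay pattern; flakyDial is a stand-in for the SSH handshake, not minikube's sshutil:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// flakyDial stands in for an SSH handshake that intermittently returns EOF
// while the node is still coming up (assumption for illustration).
func flakyDial(attempt int) error {
	if attempt < 2 {
		return errors.New("ssh: handshake failed: EOF")
	}
	return nil
}

func main() {
	for attempt := 0; attempt < 5; attempt++ {
		if err := flakyDial(attempt); err != nil {
			// Randomized delay, mirroring the "will retry after ...ms" lines above.
			delay := time.Duration(100+rand.Intn(300)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			continue
		}
		fmt.Println("ssh client established")
		return
	}
	fmt.Println("giving up")
}
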
	I1002 06:42:32.878209  295123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:42:33.158378  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:42:33.398660  295123 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 06:42:33.398724  295123 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 06:42:33.435775  295123 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 06:42:33.435850  295123 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 06:42:33.442891  295123 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 06:42:33.442916  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 06:42:33.460030  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:42:33.464445  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 06:42:33.501703  295123 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 06:42:33.501734  295123 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 06:42:33.510484  295123 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 06:42:33.510512  295123 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 06:42:33.572556  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:42:33.592358  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:42:33.630697  295123 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 06:42:33.630734  295123 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 06:42:33.656926  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:42:33.660619  295123 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 06:42:33.660653  295123 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 06:42:33.667640  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:42:33.685854  295123 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:42:33.685880  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 06:42:33.722572  295123 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:33.722597  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 06:42:33.730176  295123 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 06:42:33.730220  295123 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 06:42:33.739401  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:42:33.794269  295123 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:42:33.794303  295123 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 06:42:33.833330  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:42:33.838806  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:33.857981  295123 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 06:42:33.858014  295123 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 06:42:33.868764  295123 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 06:42:33.868797  295123 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 06:42:33.887799  295123 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 06:42:33.887827  295123 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 06:42:33.931924  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:42:34.017288  295123 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:42:34.017333  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 06:42:34.020695  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:42:34.028828  295123 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 06:42:34.028865  295123 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 06:42:34.121362  295123 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 06:42:34.121405  295123 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 06:42:34.169436  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:42:34.224918  295123 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:42:34.224942  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 06:42:34.269460  295123 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 06:42:34.269503  295123 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 06:42:34.361719  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:42:34.441417  295123 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 06:42:34.441452  295123 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 06:42:34.504071  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.345661127s)
	I1002 06:42:34.504128  295123 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.625765233s)
	I1002 06:42:34.504195  295123 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.90336598s)
	I1002 06:42:34.504212  295123 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 06:42:34.505008  295123 node_ready.go:35] waiting up to 6m0s for node "addons-067378" to be "Ready" ...
	I1002 06:42:34.714164  295123 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 06:42:34.714193  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 06:42:34.909666  295123 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 06:42:34.909688  295123 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 06:42:35.009973  295123 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-067378" context rescaled to 1 replicas
	I1002 06:42:35.051177  295123 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 06:42:35.051200  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 06:42:35.233925  295123 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 06:42:35.233953  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 06:42:35.436746  295123 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:42:35.436773  295123 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 06:42:35.661704  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:42:36.313343  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.853273797s)
	I1002 06:42:36.313420  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.848951786s)
	I1002 06:42:36.313446  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.74087196s)
	W1002 06:42:36.509129  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:36.924065  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.331668218s)
	I1002 06:42:36.924269  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.267308621s)
	I1002 06:42:36.924304  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.256645339s)
	I1002 06:42:37.049269  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.309831751s)
	I1002 06:42:37.049356  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.216001293s)
	I1002 06:42:37.049373  295123 addons.go:479] Verifying addon registry=true in "addons-067378"
	I1002 06:42:37.053079  295123 out.go:179] * Verifying registry addon...
	I1002 06:42:37.056741  295123 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 06:42:37.109029  295123 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:42:37.109049  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:37.212257  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.373411174s)
	W1002 06:42:37.212289  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:37.212311  295123 retry.go:31] will retry after 342.5556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:37.555710  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:37.647765  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:38.077073  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:42:38.526399  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:38.559700  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.627696679s)
	I1002 06:42:38.559734  295123 addons.go:479] Verifying addon ingress=true in "addons-067378"
	I1002 06:42:38.560051  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.539320893s)
	I1002 06:42:38.560074  295123 addons.go:479] Verifying addon metrics-server=true in "addons-067378"
	I1002 06:42:38.560210  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.198450988s)
	W1002 06:42:38.560239  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:42:38.560256  295123 retry.go:31] will retry after 222.959417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:42:38.560372  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.390656007s)
	I1002 06:42:38.563230  295123 out.go:179] * Verifying ingress addon...
	I1002 06:42:38.565276  295123 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-067378 service yakd-dashboard -n yakd-dashboard
	
	I1002 06:42:38.567860  295123 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 06:42:38.574149  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:38.574581  295123 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 06:42:38.574627  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:38.784210  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:42:39.051551  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.389750517s)
	I1002 06:42:39.051738  295123 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-067378"
	I1002 06:42:39.051694  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.495892999s)
	W1002 06:42:39.051817  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:39.051856  295123 retry.go:31] will retry after 398.626076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:39.055437  295123 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 06:42:39.059303  295123 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 06:42:39.086636  295123 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:42:39.086707  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:39.087180  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:39.092780  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:39.451007  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:39.564290  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:39.564909  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:39.571305  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:40.052927  295123 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 06:42:40.053084  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:40.069804  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:40.069876  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:40.086205  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:40.091006  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:40.213741  295123 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 06:42:40.230739  295123 addons.go:238] Setting addon gcp-auth=true in "addons-067378"
	I1002 06:42:40.230791  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:40.231261  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:40.251934  295123 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 06:42:40.251993  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:40.290756  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	W1002 06:42:40.387242  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:40.387270  295123 retry.go:31] will retry after 574.738578ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:40.391514  295123 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:42:40.394306  295123 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 06:42:40.397096  295123 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 06:42:40.397115  295123 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 06:42:40.410446  295123 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 06:42:40.410471  295123 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 06:42:40.423035  295123 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:42:40.423056  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 06:42:40.436365  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:42:40.562273  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:40.564681  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:40.572671  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:40.917818  295123 addons.go:479] Verifying addon gcp-auth=true in "addons-067378"
	I1002 06:42:40.920784  295123 out.go:179] * Verifying gcp-auth addon...
	I1002 06:42:40.924486  295123 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 06:42:40.928599  295123 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 06:42:40.928667  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:40.962756  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 06:42:41.008187  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:41.073467  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:41.073680  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:41.078183  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:41.428650  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:41.560662  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:41.563286  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:41.571376  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 06:42:41.779246  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:41.779279  295123 retry.go:31] will retry after 957.626623ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:41.928146  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:42.060368  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:42.063296  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:42.077146  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:42.429281  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:42.560699  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:42.564643  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:42.571269  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:42.737716  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:42.928423  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:43.010929  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:43.060062  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:43.062509  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:43.071984  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:43.428095  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:43.543988  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:43.544071  295123 retry.go:31] will retry after 746.606443ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:43.562301  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:43.562834  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:43.571996  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:43.928274  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:44.060936  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:44.064415  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:44.071491  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:44.291843  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:44.427582  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:44.566495  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:44.566989  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:44.572284  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:44.928703  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:45.024235  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:45.061816  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:45.065355  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:45.099768  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:45.351808  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.059915661s)
	W1002 06:42:45.351924  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:45.351979  295123 retry.go:31] will retry after 1.77210152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:45.433371  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:45.560417  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:45.563394  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:45.572185  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:45.928347  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:46.061937  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:46.063034  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:46.072508  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:46.427775  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:46.560918  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:46.562864  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:46.570722  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:46.927285  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:47.060666  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:47.062901  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:47.071908  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:47.124991  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:47.428767  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:47.509184  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:47.566638  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:47.569510  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:47.571568  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 06:42:47.918869  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:47.918912  295123 retry.go:31] will retry after 1.841110372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:47.927945  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:48.060381  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:48.063710  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:48.076876  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:48.428121  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:48.560242  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:48.561907  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:48.570697  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:48.928266  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:49.059559  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:49.062708  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:49.072233  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:49.428283  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:49.560727  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:49.562603  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:49.571576  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:49.761057  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:49.927528  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:50.013054  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:50.062146  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:50.064287  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:50.071946  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:50.428159  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:50.562035  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:50.563971  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:50.571207  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 06:42:50.585648  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:50.585683  295123 retry.go:31] will retry after 5.495287107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:50.928305  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:51.060341  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:51.062472  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:51.073209  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:51.427688  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:51.560089  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:51.562293  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:51.571525  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:51.927833  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:52.020845  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:52.060126  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:52.062689  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:52.071673  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:52.428080  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:52.561743  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:52.563823  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:52.570668  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:52.928177  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:53.060011  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:53.061773  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:53.071673  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:53.428134  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:53.561981  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:53.562982  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:53.571117  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:53.928068  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:54.059766  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:54.061995  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:54.071727  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:54.427359  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:54.508229  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:54.562342  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:54.564089  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:54.571065  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:54.928805  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:55.060563  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:55.062902  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:55.076558  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:55.427804  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:55.560940  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:55.563409  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:55.571595  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:55.928759  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:56.060942  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:56.062727  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:56.076238  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:56.081130  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:56.427609  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:56.509252  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:56.564725  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:56.565340  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:56.571679  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 06:42:56.874526  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:56.874563  295123 retry.go:31] will retry after 5.014714007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:56.928055  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:57.059855  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:57.061813  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:57.071964  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:57.428564  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:57.561972  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:57.562556  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:57.574399  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:57.928079  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:58.061184  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:58.063164  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:58.075651  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:58.427318  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:58.560966  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:58.562728  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:58.571579  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:58.928425  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:59.010352  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:59.060875  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:59.063153  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:59.072224  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:59.427495  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:59.560555  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:59.562513  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:59.571618  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:59.927929  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:00.088117  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:00.088924  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:00.091832  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:00.429060  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:00.561808  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:00.563592  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:00.572420  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:00.927938  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:01.059984  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:01.061921  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:01.071575  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:01.427638  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:43:01.508149  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:43:01.560582  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:01.563203  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:01.571221  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:01.889528  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:43:01.928533  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:02.060586  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:02.063814  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:02.072836  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:02.428751  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:02.563786  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:02.565116  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:02.571027  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 06:43:02.758459  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:43:02.758496  295123 retry.go:31] will retry after 8.883761034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:43:02.927859  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:03.059726  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:03.062243  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:03.071707  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:03.428033  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:43:03.510944  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:43:03.560556  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:03.562610  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:03.572373  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:03.927894  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:04.060931  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:04.062872  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:04.070999  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:04.428283  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:04.561580  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:04.562902  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:04.571662  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:04.928827  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:05.059887  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:05.062124  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:05.071873  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:05.428009  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:05.561304  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:05.563686  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:05.571674  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:05.928227  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:43:06.010543  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:43:06.061466  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:06.062947  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:06.072581  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:06.427354  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:06.560924  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:06.563386  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:06.571040  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:06.927831  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:07.060081  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:07.062212  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:07.071262  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:07.428192  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:07.560752  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:07.562844  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:07.571659  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:07.927980  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:43:08.010858  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:43:08.060080  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:08.062301  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:08.076371  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:08.428309  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:08.562058  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:08.562328  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:08.571059  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:08.927806  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:09.059822  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:09.062756  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:09.072327  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:09.428457  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:09.562087  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:09.562296  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:09.571337  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:09.931470  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:43:10.017317  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:43:10.061683  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:10.062895  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:10.072321  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:10.428530  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:10.560309  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:10.562540  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:10.571473  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:10.927401  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:11.060754  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:11.062699  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:11.075770  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:11.427735  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:11.561353  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:11.563226  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:11.570962  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:11.643343  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:43:11.927864  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:12.062674  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:12.065240  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:12.071717  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:12.427869  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:43:12.455208  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:43:12.455243  295123 retry.go:31] will retry after 18.122148078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:43:12.508095  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:43:12.560935  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:12.563425  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:12.571042  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:12.928123  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:13.059876  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:13.062047  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:13.071629  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:13.430177  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:13.523521  295123 node_ready.go:49] node "addons-067378" is "Ready"
	I1002 06:43:13.523552  295123 node_ready.go:38] duration metric: took 39.018514975s for node "addons-067378" to be "Ready" ...
	I1002 06:43:13.523567  295123 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:43:13.523628  295123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:43:13.548592  295123 api_server.go:72] duration metric: took 41.504408642s to wait for apiserver process to appear ...
	I1002 06:43:13.548620  295123 api_server.go:88] waiting for apiserver healthz status ...
	I1002 06:43:13.548640  295123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 06:43:13.578362  295123 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 06:43:13.589679  295123 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:43:13.589705  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:13.589837  295123 api_server.go:141] control plane version: v1.34.1
	I1002 06:43:13.589863  295123 api_server.go:131] duration metric: took 41.236606ms to wait for apiserver health ...
	I1002 06:43:13.589872  295123 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 06:43:13.590150  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:13.590221  295123 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:43:13.590234  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:13.696622  295123 system_pods.go:59] 19 kube-system pods found
	I1002 06:43:13.696658  295123 system_pods.go:61] "coredns-66bc5c9577-hqkgq" [842b83a7-7c09-4912-b9be-4ecce88ce7ca] Pending
	I1002 06:43:13.696668  295123 system_pods.go:61] "csi-hostpath-attacher-0" [10e37445-7bbb-44bc-9359-12524f894f88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:43:13.696673  295123 system_pods.go:61] "csi-hostpath-resizer-0" [863579f2-ece1-46c1-8f65-cdc2f410a1ab] Pending
	I1002 06:43:13.696679  295123 system_pods.go:61] "csi-hostpathplugin-g5rfp" [4dcebe4e-2c41-4731-a568-c47ea66b900d] Pending
	I1002 06:43:13.696684  295123 system_pods.go:61] "etcd-addons-067378" [0b35790c-32b5-4476-8519-d49ae2cf6f68] Running
	I1002 06:43:13.696688  295123 system_pods.go:61] "kindnet-rvljv" [3c704515-6f3d-45d5-a055-39afc813eeb5] Running
	I1002 06:43:13.696693  295123 system_pods.go:61] "kube-apiserver-addons-067378" [00be11a7-5cb7-4a64-8584-0d45b9b8057f] Running
	I1002 06:43:13.696698  295123 system_pods.go:61] "kube-controller-manager-addons-067378" [8450cb9e-1281-47df-964c-6ce56c609204] Running
	I1002 06:43:13.696704  295123 system_pods.go:61] "kube-ingress-dns-minikube" [57c3c67f-d7c1-4538-bb00-1a8cee5bee92] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:43:13.696713  295123 system_pods.go:61] "kube-proxy-glkj6" [245ca456-f1cb-4de2-bb7c-9cc322f5ab9d] Running
	I1002 06:43:13.696718  295123 system_pods.go:61] "kube-scheduler-addons-067378" [faf63f65-ae11-4b01-b3d3-6d71a1ad21ef] Running
	I1002 06:43:13.696730  295123 system_pods.go:61] "metrics-server-85b7d694d7-6x654" [0118f095-2060-4680-b4c9-c2c78976dda1] Pending
	I1002 06:43:13.696735  295123 system_pods.go:61] "nvidia-device-plugin-daemonset-kjxmr" [2391a5b9-29ae-4cd1-83fe-07aca873c5d1] Pending
	I1002 06:43:13.696740  295123 system_pods.go:61] "registry-66898fdd98-w2szx" [b634a53f-990a-4739-a9b3-2cf22c99e147] Pending
	I1002 06:43:13.696744  295123 system_pods.go:61] "registry-creds-764b6fb674-j77fn" [62c7e651-a525-434a-b3a2-67917ea0034f] Pending
	I1002 06:43:13.696754  295123 system_pods.go:61] "registry-proxy-zrq82" [76bc889e-53d2-4b4b-89a1-527536fef260] Pending
	I1002 06:43:13.696760  295123 system_pods.go:61] "snapshot-controller-7d9fbc56b8-57t4l" [564a4d92-8a32-4efe-917b-69afe2ecffa4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:13.696767  295123 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vvfqw" [ec961b39-c695-47f2-bcfd-9196e9e451a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:13.696777  295123 system_pods.go:61] "storage-provisioner" [0b1f3ab3-a366-4164-97c6-d59947371157] Pending
	I1002 06:43:13.696784  295123 system_pods.go:74] duration metric: took 106.90342ms to wait for pod list to return data ...
	I1002 06:43:13.696792  295123 default_sa.go:34] waiting for default service account to be created ...
	I1002 06:43:13.724220  295123 default_sa.go:45] found service account: "default"
	I1002 06:43:13.724252  295123 default_sa.go:55] duration metric: took 27.44824ms for default service account to be created ...
	I1002 06:43:13.724265  295123 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 06:43:13.749400  295123 system_pods.go:86] 19 kube-system pods found
	I1002 06:43:13.749436  295123 system_pods.go:89] "coredns-66bc5c9577-hqkgq" [842b83a7-7c09-4912-b9be-4ecce88ce7ca] Pending
	I1002 06:43:13.749448  295123 system_pods.go:89] "csi-hostpath-attacher-0" [10e37445-7bbb-44bc-9359-12524f894f88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:43:13.749454  295123 system_pods.go:89] "csi-hostpath-resizer-0" [863579f2-ece1-46c1-8f65-cdc2f410a1ab] Pending
	I1002 06:43:13.749459  295123 system_pods.go:89] "csi-hostpathplugin-g5rfp" [4dcebe4e-2c41-4731-a568-c47ea66b900d] Pending
	I1002 06:43:13.749463  295123 system_pods.go:89] "etcd-addons-067378" [0b35790c-32b5-4476-8519-d49ae2cf6f68] Running
	I1002 06:43:13.749467  295123 system_pods.go:89] "kindnet-rvljv" [3c704515-6f3d-45d5-a055-39afc813eeb5] Running
	I1002 06:43:13.749472  295123 system_pods.go:89] "kube-apiserver-addons-067378" [00be11a7-5cb7-4a64-8584-0d45b9b8057f] Running
	I1002 06:43:13.749476  295123 system_pods.go:89] "kube-controller-manager-addons-067378" [8450cb9e-1281-47df-964c-6ce56c609204] Running
	I1002 06:43:13.749487  295123 system_pods.go:89] "kube-ingress-dns-minikube" [57c3c67f-d7c1-4538-bb00-1a8cee5bee92] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:43:13.749495  295123 system_pods.go:89] "kube-proxy-glkj6" [245ca456-f1cb-4de2-bb7c-9cc322f5ab9d] Running
	I1002 06:43:13.749500  295123 system_pods.go:89] "kube-scheduler-addons-067378" [faf63f65-ae11-4b01-b3d3-6d71a1ad21ef] Running
	I1002 06:43:13.749506  295123 system_pods.go:89] "metrics-server-85b7d694d7-6x654" [0118f095-2060-4680-b4c9-c2c78976dda1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:43:13.749517  295123 system_pods.go:89] "nvidia-device-plugin-daemonset-kjxmr" [2391a5b9-29ae-4cd1-83fe-07aca873c5d1] Pending
	I1002 06:43:13.749521  295123 system_pods.go:89] "registry-66898fdd98-w2szx" [b634a53f-990a-4739-a9b3-2cf22c99e147] Pending
	I1002 06:43:13.749525  295123 system_pods.go:89] "registry-creds-764b6fb674-j77fn" [62c7e651-a525-434a-b3a2-67917ea0034f] Pending
	I1002 06:43:13.749536  295123 system_pods.go:89] "registry-proxy-zrq82" [76bc889e-53d2-4b4b-89a1-527536fef260] Pending
	I1002 06:43:13.749542  295123 system_pods.go:89] "snapshot-controller-7d9fbc56b8-57t4l" [564a4d92-8a32-4efe-917b-69afe2ecffa4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:13.749549  295123 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vvfqw" [ec961b39-c695-47f2-bcfd-9196e9e451a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:13.749559  295123 system_pods.go:89] "storage-provisioner" [0b1f3ab3-a366-4164-97c6-d59947371157] Pending
	I1002 06:43:13.749584  295123 retry.go:31] will retry after 241.662189ms: missing components: kube-dns
	I1002 06:43:13.950766  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:14.022829  295123 system_pods.go:86] 19 kube-system pods found
	I1002 06:43:14.022870  295123 system_pods.go:89] "coredns-66bc5c9577-hqkgq" [842b83a7-7c09-4912-b9be-4ecce88ce7ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:43:14.022879  295123 system_pods.go:89] "csi-hostpath-attacher-0" [10e37445-7bbb-44bc-9359-12524f894f88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:43:14.022885  295123 system_pods.go:89] "csi-hostpath-resizer-0" [863579f2-ece1-46c1-8f65-cdc2f410a1ab] Pending
	I1002 06:43:14.022893  295123 system_pods.go:89] "csi-hostpathplugin-g5rfp" [4dcebe4e-2c41-4731-a568-c47ea66b900d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:43:14.022898  295123 system_pods.go:89] "etcd-addons-067378" [0b35790c-32b5-4476-8519-d49ae2cf6f68] Running
	I1002 06:43:14.022903  295123 system_pods.go:89] "kindnet-rvljv" [3c704515-6f3d-45d5-a055-39afc813eeb5] Running
	I1002 06:43:14.022907  295123 system_pods.go:89] "kube-apiserver-addons-067378" [00be11a7-5cb7-4a64-8584-0d45b9b8057f] Running
	I1002 06:43:14.022911  295123 system_pods.go:89] "kube-controller-manager-addons-067378" [8450cb9e-1281-47df-964c-6ce56c609204] Running
	I1002 06:43:14.022918  295123 system_pods.go:89] "kube-ingress-dns-minikube" [57c3c67f-d7c1-4538-bb00-1a8cee5bee92] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:43:14.022922  295123 system_pods.go:89] "kube-proxy-glkj6" [245ca456-f1cb-4de2-bb7c-9cc322f5ab9d] Running
	I1002 06:43:14.022927  295123 system_pods.go:89] "kube-scheduler-addons-067378" [faf63f65-ae11-4b01-b3d3-6d71a1ad21ef] Running
	I1002 06:43:14.022934  295123 system_pods.go:89] "metrics-server-85b7d694d7-6x654" [0118f095-2060-4680-b4c9-c2c78976dda1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:43:14.022938  295123 system_pods.go:89] "nvidia-device-plugin-daemonset-kjxmr" [2391a5b9-29ae-4cd1-83fe-07aca873c5d1] Pending
	I1002 06:43:14.022945  295123 system_pods.go:89] "registry-66898fdd98-w2szx" [b634a53f-990a-4739-a9b3-2cf22c99e147] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:43:14.022954  295123 system_pods.go:89] "registry-creds-764b6fb674-j77fn" [62c7e651-a525-434a-b3a2-67917ea0034f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:43:14.022962  295123 system_pods.go:89] "registry-proxy-zrq82" [76bc889e-53d2-4b4b-89a1-527536fef260] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:43:14.022974  295123 system_pods.go:89] "snapshot-controller-7d9fbc56b8-57t4l" [564a4d92-8a32-4efe-917b-69afe2ecffa4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:14.022980  295123 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vvfqw" [ec961b39-c695-47f2-bcfd-9196e9e451a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:14.022985  295123 system_pods.go:89] "storage-provisioner" [0b1f3ab3-a366-4164-97c6-d59947371157] Pending
	I1002 06:43:14.023007  295123 retry.go:31] will retry after 298.767136ms: missing components: kube-dns
	I1002 06:43:14.097948  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:14.098391  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:14.100832  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:14.326750  295123 system_pods.go:86] 19 kube-system pods found
	I1002 06:43:14.326788  295123 system_pods.go:89] "coredns-66bc5c9577-hqkgq" [842b83a7-7c09-4912-b9be-4ecce88ce7ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:43:14.326803  295123 system_pods.go:89] "csi-hostpath-attacher-0" [10e37445-7bbb-44bc-9359-12524f894f88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:43:14.326813  295123 system_pods.go:89] "csi-hostpath-resizer-0" [863579f2-ece1-46c1-8f65-cdc2f410a1ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 06:43:14.326820  295123 system_pods.go:89] "csi-hostpathplugin-g5rfp" [4dcebe4e-2c41-4731-a568-c47ea66b900d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:43:14.326831  295123 system_pods.go:89] "etcd-addons-067378" [0b35790c-32b5-4476-8519-d49ae2cf6f68] Running
	I1002 06:43:14.326836  295123 system_pods.go:89] "kindnet-rvljv" [3c704515-6f3d-45d5-a055-39afc813eeb5] Running
	I1002 06:43:14.326843  295123 system_pods.go:89] "kube-apiserver-addons-067378" [00be11a7-5cb7-4a64-8584-0d45b9b8057f] Running
	I1002 06:43:14.326847  295123 system_pods.go:89] "kube-controller-manager-addons-067378" [8450cb9e-1281-47df-964c-6ce56c609204] Running
	I1002 06:43:14.326855  295123 system_pods.go:89] "kube-ingress-dns-minikube" [57c3c67f-d7c1-4538-bb00-1a8cee5bee92] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:43:14.326868  295123 system_pods.go:89] "kube-proxy-glkj6" [245ca456-f1cb-4de2-bb7c-9cc322f5ab9d] Running
	I1002 06:43:14.326873  295123 system_pods.go:89] "kube-scheduler-addons-067378" [faf63f65-ae11-4b01-b3d3-6d71a1ad21ef] Running
	I1002 06:43:14.326879  295123 system_pods.go:89] "metrics-server-85b7d694d7-6x654" [0118f095-2060-4680-b4c9-c2c78976dda1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:43:14.326886  295123 system_pods.go:89] "nvidia-device-plugin-daemonset-kjxmr" [2391a5b9-29ae-4cd1-83fe-07aca873c5d1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:43:14.326898  295123 system_pods.go:89] "registry-66898fdd98-w2szx" [b634a53f-990a-4739-a9b3-2cf22c99e147] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:43:14.326910  295123 system_pods.go:89] "registry-creds-764b6fb674-j77fn" [62c7e651-a525-434a-b3a2-67917ea0034f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:43:14.326915  295123 system_pods.go:89] "registry-proxy-zrq82" [76bc889e-53d2-4b4b-89a1-527536fef260] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:43:14.326922  295123 system_pods.go:89] "snapshot-controller-7d9fbc56b8-57t4l" [564a4d92-8a32-4efe-917b-69afe2ecffa4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:14.326931  295123 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vvfqw" [ec961b39-c695-47f2-bcfd-9196e9e451a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:14.326936  295123 system_pods.go:89] "storage-provisioner" [0b1f3ab3-a366-4164-97c6-d59947371157] Running
	I1002 06:43:14.326947  295123 system_pods.go:126] duration metric: took 602.676668ms to wait for k8s-apps to be running ...
	I1002 06:43:14.326958  295123 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 06:43:14.327011  295123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:43:14.345372  295123 system_svc.go:56] duration metric: took 18.404198ms WaitForService to wait for kubelet
	I1002 06:43:14.345404  295123 kubeadm.go:586] duration metric: took 42.301222029s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:43:14.345424  295123 node_conditions.go:102] verifying NodePressure condition ...
	I1002 06:43:14.348581  295123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 06:43:14.348614  295123 node_conditions.go:123] node cpu capacity is 2
	I1002 06:43:14.348627  295123 node_conditions.go:105] duration metric: took 3.197799ms to run NodePressure ...
	I1002 06:43:14.348640  295123 start.go:241] waiting for startup goroutines ...
	I1002 06:43:14.428294  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:14.566133  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:14.566601  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:14.574186  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:14.928429  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:15.069815  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:15.071054  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:15.076967  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:15.428467  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:15.568748  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:15.569195  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:15.584160  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:15.929630  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:16.060340  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:16.064947  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:16.072002  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:16.428800  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:16.564726  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:16.565322  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:16.571587  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:16.927766  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:17.063701  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:17.066486  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:17.073628  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:17.428091  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:17.560897  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:17.563681  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:17.571596  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:17.927785  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:18.062473  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:18.064241  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:18.071834  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:18.428853  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:18.560427  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:18.564741  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:18.571734  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:18.927703  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:19.061640  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:19.063360  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:19.075431  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:19.428018  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:19.560332  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:19.563413  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:19.571310  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:19.927714  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:20.060725  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:20.062806  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:20.071641  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:20.428218  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:20.561053  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:20.562947  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:20.571622  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:20.928311  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:21.060494  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:21.063024  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:21.071275  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:21.429147  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:21.564007  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:21.564340  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:21.571948  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:21.929078  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:22.064312  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:22.064890  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:22.071476  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:22.428285  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:22.562815  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:22.564885  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:22.571647  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:22.928147  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:23.062182  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:23.065226  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:23.071702  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:23.429253  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:23.566410  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:23.567677  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:23.572252  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:23.928812  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:24.060890  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:24.063330  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:24.071627  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:24.428579  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:24.565466  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:24.565597  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:24.571150  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:24.928338  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:25.060416  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:25.063142  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:25.072183  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:25.428614  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:25.562347  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:25.565185  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:25.572210  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:25.928038  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:26.061704  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:26.064142  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:26.071628  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:26.428147  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:26.561940  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:26.565251  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:26.571646  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:26.928656  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:27.060949  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:27.064860  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:27.071995  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:27.428289  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:27.563067  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:27.563293  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:27.571521  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:27.928649  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:28.061832  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:28.064947  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:28.071500  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:28.428445  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:28.565905  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:28.566319  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:28.574137  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:28.928404  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:29.061077  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:29.064326  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:29.071471  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:29.428510  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:29.566557  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:29.573966  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:29.575963  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:29.928182  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:30.063437  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:30.067384  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:30.096974  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:30.429013  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:30.565585  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:30.566027  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:30.572366  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:30.577584  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:43:30.928569  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:31.061618  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:31.064841  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:31.071795  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:31.428352  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:31.569126  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:31.569271  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:31.572359  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:31.630113  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.052486797s)
	W1002 06:43:31.630155  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:43:31.630173  295123 retry.go:31] will retry after 18.374199675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
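The failure above is the same on every retry: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest has no top-level apiVersion or kind field, so the gadget (Inspektor Gadget) addon apply never converges. As the error text itself suggests, the apply can be pushed through by disabling client-side validation. A minimal sketch of that manual workaround, run on the node and reusing the exact command recorded in the log (only --validate=false is added), would be:

    # hypothetical manual retry; skips the client-side schema check the log complains about
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/ig-crd.yaml \
      -f /etc/kubernetes/addons/ig-deployment.yaml

Skipping validation only hides the symptom; the shipped ig-crd.yaml still needs its apiVersion and kind fields for a clean apply.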
	I1002 06:43:31.928306  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:32.060715  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:32.063338  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:32.071586  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:32.428456  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:32.565733  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:32.566203  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:32.571027  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:32.927739  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:33.060386  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:33.062507  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:33.072136  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:33.428511  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:33.563345  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:33.566163  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:33.572546  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:33.927818  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:34.060684  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:34.063728  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:34.071460  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:34.428854  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:34.559933  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:34.562721  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:34.571862  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:34.928082  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:35.060900  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:35.064134  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:35.071324  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:35.428515  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:35.561221  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:35.564079  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:35.570832  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:35.927642  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:36.063209  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:36.063323  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:36.072210  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:36.435399  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:36.562960  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:36.563485  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:36.571262  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:36.928483  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:37.061187  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:37.064228  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:37.076792  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:37.428807  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:37.561341  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:37.566559  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:37.571476  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:37.928265  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:38.061217  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:38.064181  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:38.071664  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:38.428541  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:38.561477  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:38.563549  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:38.572027  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:38.928787  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:39.060917  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:39.065402  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:39.071222  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:39.428608  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:39.565935  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:39.566239  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:39.571237  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:39.927425  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:40.060993  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:40.064323  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:40.071861  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:40.428645  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:40.560591  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:40.564977  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:40.572454  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:40.927723  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:41.064301  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:41.065269  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:41.072737  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:41.429176  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:41.561075  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:41.564632  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:41.571842  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:41.928196  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:42.065590  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:42.066004  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:42.077105  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:42.433561  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:42.561238  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:42.564536  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:42.571255  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:42.927687  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:43.061692  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:43.064577  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:43.071915  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:43.428426  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:43.562304  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:43.564143  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:43.571345  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:43.927841  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:44.061679  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:44.064295  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:44.072070  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:44.428585  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:44.569924  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:44.572010  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:44.572997  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:44.928493  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:45.072484  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:45.073173  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:45.120383  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:45.429836  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:45.561518  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:45.567750  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:45.572017  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:45.928443  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:46.063402  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:46.063915  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:46.072121  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:46.428970  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:46.562343  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:46.563107  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:46.571654  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:46.929523  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:47.062061  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:47.065903  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:47.078976  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:47.429226  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:47.560842  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:47.564516  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:47.572897  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:47.929712  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:48.063749  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:48.067374  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:48.072824  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:48.428303  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:48.563434  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:48.563908  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:48.664426  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:48.927696  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:49.059948  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:49.063000  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:49.071896  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:49.433245  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:49.562659  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:49.563406  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:49.571612  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:49.927881  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:50.009926  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:43:50.061421  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:50.083710  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:50.084045  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:50.428627  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:50.564618  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:50.564872  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:50.578361  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:50.929159  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:51.061104  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:51.064167  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:51.071695  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:51.173203  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.16321785s)
	W1002 06:43:51.173288  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:43:51.173355  295123 retry.go:31] will retry after 34.424856834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:43:51.432500  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:51.562806  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:51.565537  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:51.572067  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:51.928647  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:52.065994  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:52.066426  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:52.073304  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:52.436933  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:52.569980  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:52.581236  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:52.583715  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:52.928226  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:53.061648  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:53.068400  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:53.071425  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:53.437280  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:53.561026  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:53.571409  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:53.576514  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:53.928244  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:54.061626  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:54.063976  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:54.071304  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:54.429661  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:54.569286  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:54.570048  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:54.575280  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:54.927258  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:55.061784  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:55.065564  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:55.072220  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:55.437566  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:55.567061  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:55.570340  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:55.571671  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:55.928276  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:56.062247  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:56.063319  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:56.072312  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:56.431466  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:56.562942  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:56.567247  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:56.571634  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:56.928071  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:57.061516  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:57.065121  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:57.072553  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:57.428613  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:57.565176  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:57.573233  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:57.578266  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:57.927823  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:58.060621  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:58.062435  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:58.072066  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:58.428512  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:58.562497  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:58.563716  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:58.571727  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:58.928836  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:59.059867  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:59.062396  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:59.071817  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:59.427895  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:59.563108  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:59.563253  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:59.571359  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:59.927537  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:00.062259  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:44:00.101600  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:00.104924  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:00.438341  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:00.564000  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:44:00.564969  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:00.572762  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:00.928202  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:01.066066  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:44:01.066319  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:01.077364  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:01.431001  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:01.567892  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:44:01.568239  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:01.574357  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:01.928480  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:02.061473  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:44:02.064718  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:02.072585  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:02.428290  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:02.560953  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:44:02.564528  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:02.572149  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:02.929998  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:03.061920  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:44:03.063755  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:03.073996  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:03.428568  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:03.566832  295123 kapi.go:107] duration metric: took 1m26.510080858s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 06:44:03.567143  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:03.571512  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:03.928365  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:04.063020  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:04.071550  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:04.427879  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:04.564155  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:04.582538  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:04.928416  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:05.063279  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:05.071445  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:05.449430  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:05.564260  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:05.571469  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:05.927818  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:06.064385  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:06.077019  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:06.428254  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:06.562192  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:06.571855  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:06.928255  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:07.062445  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:07.072063  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:07.428273  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:07.564318  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:07.572438  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:07.927556  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:08.062699  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:08.071796  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:08.428207  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:08.563996  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:08.580684  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:08.928573  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:09.063515  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:09.082292  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:09.428659  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:09.564643  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:09.573491  295123 kapi.go:107] duration metric: took 1m31.005629359s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 06:44:09.927858  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:10.063583  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:10.428000  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:10.580376  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:10.928089  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:11.064340  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:11.428337  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:11.565677  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:11.928143  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:12.064106  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:12.428399  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:12.563644  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:12.928281  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:13.066610  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:13.428276  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:13.566470  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:13.928829  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:14.064216  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:14.429315  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:14.563871  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:14.930850  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:15.064739  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:15.427835  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:15.562841  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:15.928347  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:16.066959  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:16.428435  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:16.563542  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:16.928100  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:17.064110  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:17.428398  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:17.564810  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:17.928542  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:18.064351  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:18.429938  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:18.562671  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:18.928333  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:19.062784  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:19.428162  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:19.562971  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:19.928478  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:20.063999  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:20.428804  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:20.567193  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:20.927631  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:21.081283  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:21.427786  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:21.564950  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:21.928607  295123 kapi.go:107] duration metric: took 1m41.004123767s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 06:44:21.930719  295123 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-067378 cluster.
	I1002 06:44:21.933534  295123 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 06:44:21.936551  295123 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 06:44:22.066203  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:22.566664  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:23.077012  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:23.563342  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:24.063700  295123 kapi.go:107] duration metric: took 1m45.004391753s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 06:44:25.599261  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 06:44:26.436237  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:44:26.436332  295123 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 06:44:26.439440  295123 out.go:179] * Enabled addons: amd-gpu-device-plugin, ingress-dns, cloud-spanner, default-storageclass, registry-creds, nvidia-device-plugin, storage-provisioner-rancher, storage-provisioner, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1002 06:44:26.442317  295123 addons.go:514] duration metric: took 1m54.397863216s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns cloud-spanner default-storageclass registry-creds nvidia-device-plugin storage-provisioner-rancher storage-provisioner metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1002 06:44:26.442407  295123 start.go:246] waiting for cluster config update ...
	I1002 06:44:26.442451  295123 start.go:255] writing updated cluster config ...
	I1002 06:44:26.442818  295123 ssh_runner.go:195] Run: rm -f paused
	I1002 06:44:26.446776  295123 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 06:44:26.450947  295123 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hqkgq" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:26.457422  295123 pod_ready.go:94] pod "coredns-66bc5c9577-hqkgq" is "Ready"
	I1002 06:44:26.457447  295123 pod_ready.go:86] duration metric: took 6.472654ms for pod "coredns-66bc5c9577-hqkgq" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:26.459887  295123 pod_ready.go:83] waiting for pod "etcd-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:26.464776  295123 pod_ready.go:94] pod "etcd-addons-067378" is "Ready"
	I1002 06:44:26.464801  295123 pod_ready.go:86] duration metric: took 4.886916ms for pod "etcd-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:26.468445  295123 pod_ready.go:83] waiting for pod "kube-apiserver-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:26.473496  295123 pod_ready.go:94] pod "kube-apiserver-addons-067378" is "Ready"
	I1002 06:44:26.473526  295123 pod_ready.go:86] duration metric: took 5.052094ms for pod "kube-apiserver-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:26.476093  295123 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:26.851778  295123 pod_ready.go:94] pod "kube-controller-manager-addons-067378" is "Ready"
	I1002 06:44:26.851826  295123 pod_ready.go:86] duration metric: took 375.688634ms for pod "kube-controller-manager-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:27.051578  295123 pod_ready.go:83] waiting for pod "kube-proxy-glkj6" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:27.451442  295123 pod_ready.go:94] pod "kube-proxy-glkj6" is "Ready"
	I1002 06:44:27.451469  295123 pod_ready.go:86] duration metric: took 399.863968ms for pod "kube-proxy-glkj6" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:27.650629  295123 pod_ready.go:83] waiting for pod "kube-scheduler-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:28.050907  295123 pod_ready.go:94] pod "kube-scheduler-addons-067378" is "Ready"
	I1002 06:44:28.050935  295123 pod_ready.go:86] duration metric: took 400.277387ms for pod "kube-scheduler-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:28.050949  295123 pod_ready.go:40] duration metric: took 1.60413685s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 06:44:28.110717  295123 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 06:44:28.115803  295123 out.go:179] * Done! kubectl is now configured to use "addons-067378" cluster and "default" namespace by default
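
The gcp-auth messages in the log above describe an opt-out: pods that should not receive the mounted credentials can carry the `gcp-auth-skip-secret` label. The sketch below is a minimal illustration of that opt-out; the label key is taken from the message, while the pod name, the label value "true", and the image are illustrative assumptions rather than anything produced by this test run.

apiVersion: v1
kind: Pod
metadata:
  name: skip-gcp-creds-example      # hypothetical name, for illustration only
  labels:
    gcp-auth-skip-secret: "true"    # key from the gcp-auth message above; value assumed
spec:
  containers:
  - name: app
    image: docker.io/kicbase/echo-server:1.0   # placeholder; this image appears elsewhere in the report

Per the same message, pods created before the addon was enabled only pick up the credentials after being recreated, or after the addon is re-enabled with --refresh.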
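
The inspektor-gadget failure above is a client-side validation error: kubectl expects every document in an applied manifest to declare top-level apiVersion and kind fields, and /etc/kubernetes/addons/ig-crd.yaml evidently contained a document without them. The actual contents of that file are not shown in this report; the sketch below is only a minimal well-formed document for contrast with the error text, not a reconstruction of ig-crd.yaml.

apiVersion: v1                      # top-level field the validator reported as missing
kind: ConfigMap                     # the other required top-level field
metadata:
  name: example-config              # hypothetical object, for illustration only
data:
  key: value

As the error text notes, --validate=false would suppress the client-side check, but apiVersion and kind are still needed to identify the resource on the server side, so fixing the manifest is the real remedy.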
	
	
	==> CRI-O <==
	Oct 02 06:47:26 addons-067378 crio[828]: time="2025-10-02T06:47:26.414645097Z" level=info msg="Removed container df56b7e6ce9f017c37463c379216ef0f2cf0989a162ae20d6e8e0ad193478fc3: kube-system/registry-creds-764b6fb674-j77fn/registry-creds" id=50e5fb8a-d5c9-4b6e-ba8b-129bca2966df name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.137009578Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-whgpg/POD" id=8fa4db0d-0252-41c6-b10e-1294f9eec094 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.13708029Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.153564762Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-whgpg Namespace:default ID:878f13cd49af91e07b74fa669da9a32e4e6f8fc7389aa08eb36e821fc98aab2b UID:03cc7a91-d490-4776-a388-07e17f194a23 NetNS:/var/run/netns/cd3bf279-1743-43e5-bd9e-966cd28b220b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001b648c0}] Aliases:map[]}"
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.153928064Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-whgpg to CNI network \"kindnet\" (type=ptp)"
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.19106021Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-whgpg Namespace:default ID:878f13cd49af91e07b74fa669da9a32e4e6f8fc7389aa08eb36e821fc98aab2b UID:03cc7a91-d490-4776-a388-07e17f194a23 NetNS:/var/run/netns/cd3bf279-1743-43e5-bd9e-966cd28b220b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001b648c0}] Aliases:map[]}"
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.191419795Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-whgpg for CNI network kindnet (type=ptp)"
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.199486427Z" level=info msg="Ran pod sandbox 878f13cd49af91e07b74fa669da9a32e4e6f8fc7389aa08eb36e821fc98aab2b with infra container: default/hello-world-app-5d498dc89-whgpg/POD" id=8fa4db0d-0252-41c6-b10e-1294f9eec094 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.200830785Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=0f1b5a98-2fe6-4235-b6a5-9f2bfcffb6ea name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.201102025Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=0f1b5a98-2fe6-4235-b6a5-9f2bfcffb6ea name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.201218202Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=0f1b5a98-2fe6-4235-b6a5-9f2bfcffb6ea name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.202153531Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=87a1a45f-9a20-4514-8c8c-05694c94d1bf name=/runtime.v1.ImageService/PullImage
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.20500457Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.833710262Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=87a1a45f-9a20-4514-8c8c-05694c94d1bf name=/runtime.v1.ImageService/PullImage
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.834294497Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5c56da95-23c6-4863-bc81-2ee8eed24c90 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.836463159Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=17869f27-c97f-4a5d-8839-47841874526a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.842348297Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-whgpg/hello-world-app" id=171f09e4-7f66-41fa-bb4f-d2f7b5808620 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.843701549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.852616814Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.85280258Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/be65ae8f769d5c180ea8cca431aaaa042b8674c5d3e963eea090081ea01b9719/merged/etc/passwd: no such file or directory"
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.85282324Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/be65ae8f769d5c180ea8cca431aaaa042b8674c5d3e963eea090081ea01b9719/merged/etc/group: no such file or directory"
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.853100273Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.870871765Z" level=info msg="Created container a3b051d9cb32224399cda9bcf26dad21944ac494d52a31f72dc70813298a1c6c: default/hello-world-app-5d498dc89-whgpg/hello-world-app" id=171f09e4-7f66-41fa-bb4f-d2f7b5808620 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.871822191Z" level=info msg="Starting container: a3b051d9cb32224399cda9bcf26dad21944ac494d52a31f72dc70813298a1c6c" id=525065c3-1d14-4719-9dc9-3171d4d5bba0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 06:47:27 addons-067378 crio[828]: time="2025-10-02T06:47:27.873848985Z" level=info msg="Started container" PID=7070 containerID=a3b051d9cb32224399cda9bcf26dad21944ac494d52a31f72dc70813298a1c6c description=default/hello-world-app-5d498dc89-whgpg/hello-world-app id=525065c3-1d14-4719-9dc9-3171d4d5bba0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=878f13cd49af91e07b74fa669da9a32e4e6f8fc7389aa08eb36e821fc98aab2b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	a3b051d9cb322       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   878f13cd49af9       hello-world-app-5d498dc89-whgpg            default
	5f5aa695c2040       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             3 seconds ago            Exited              registry-creds                           1                   58e49be6990eb       registry-creds-764b6fb674-j77fn            kube-system
	c2066f9b62f55       docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac                                              2 minutes ago            Running             nginx                                    0                   73d5633d87f6a       nginx                                      default
	7004a722b720c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   163a58ffcd5b3       busybox                                    default
	715051dd29f98       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   fd3e3ed4b2778       csi-hostpathplugin-g5rfp                   kube-system
	03a89e8a85aa8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   7f3aa1d4e528e       gcp-auth-78565c9fb4-sf7d5                  gcp-auth
	d72616d82a4c6       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   fd3e3ed4b2778       csi-hostpathplugin-g5rfp                   kube-system
	850c05bdc05e6       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   fd3e3ed4b2778       csi-hostpathplugin-g5rfp                   kube-system
	96695eb2b2b1c       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   fd3e3ed4b2778       csi-hostpathplugin-g5rfp                   kube-system
	8e159425d0843       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   fd3e3ed4b2778       csi-hostpathplugin-g5rfp                   kube-system
	f0e88be7831a3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:74b72c3673aff7e1fa7c3ebae80b5dbe5446ce1906ef8d4f98d4b9f6e72c88e1                            3 minutes ago            Running             gadget                                   0                   84f50fa31f325       gadget-bvpt5                               gadget
	855ae3081a142       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             3 minutes ago            Running             controller                               0                   275a88e59c85e       ingress-nginx-controller-9cc49f96f-jv8pp   ingress-nginx
	35286e26bd2b2       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   b568908f00078       registry-proxy-zrq82                       kube-system
	38a84c1da31e3       gcr.io/cloud-spanner-emulator/emulator@sha256:77d0cd8103fe32875bbb04c070a7d1db292093b65d11c99c00cf39e8a13852f5                               3 minutes ago            Running             cloud-spanner-emulator                   0                   32bfaeb6e64fc       cloud-spanner-emulator-85f6b7fc65-nt86x    default
	3caf90b5c6d09       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           3 minutes ago            Running             registry                                 0                   4e724c3202c9d       registry-66898fdd98-w2szx                  kube-system
	6c102718e7f7f       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   9d74966aa9792       metrics-server-85b7d694d7-6x654            kube-system
	8832f8099b85d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   fd3e3ed4b2778       csi-hostpathplugin-g5rfp                   kube-system
	69fbb8d36215a       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   8921485f2ce4d       csi-hostpath-resizer-0                     kube-system
	f0b36ca509d15       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   829467d1d5587       kube-ingress-dns-minikube                  kube-system
	e4e74e65e570a       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   63779bb51994a       csi-hostpath-attacher-0                    kube-system
	db9280fb3f8c3       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   0578acb842464       snapshot-controller-7d9fbc56b8-vvfqw       kube-system
	0cbf532af43dd       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   40d78e61eb145       snapshot-controller-7d9fbc56b8-57t4l       kube-system
	7a322d3dc58d8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   3 minutes ago            Exited              patch                                    0                   e69c408c3a50b       ingress-nginx-admission-patch-dqc9b        ingress-nginx
	1bc50c5a2a408       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   ae93b898b1aba       nvidia-device-plugin-daemonset-kjxmr       kube-system
	8319af6a35e19       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   4 minutes ago            Exited              create                                   0                   75efa1b378415       ingress-nginx-admission-create-sp78n       ingress-nginx
	b39b1b42acab5       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   b344b92e7963d       local-path-provisioner-648f6765c9-mrnqw    local-path-storage
	35efe6f5a1350       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   28c7544b1ab82       yakd-dashboard-5ff678cb9-x6zz2             yakd-dashboard
	23849ffb383b4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   b20e4d911d382       storage-provisioner                        kube-system
	cf51374ee4e78       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   449ae6dc50fde       coredns-66bc5c9577-hqkgq                   kube-system
	8cfee21867a88       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   53d68908732bc       kindnet-rvljv                              kube-system
	28e97317d945c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   107b2ab53cae5       kube-proxy-glkj6                           kube-system
	26b745984d39c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   d3aece2f216d9       kube-scheduler-addons-067378               kube-system
	f91e161872e50       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   f2b21a23ed9a9       etcd-addons-067378                         kube-system
	4d452e796395f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   405fb14328cec       kube-apiserver-addons-067378               kube-system
	b06978953fd6c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   9f99f54465f46       kube-controller-manager-addons-067378      kube-system
	
	
	==> coredns [cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c] <==
	[INFO] 10.244.0.14:46231 - 15787 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001997467s
	[INFO] 10.244.0.14:46231 - 56632 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000118532s
	[INFO] 10.244.0.14:46231 - 47175 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000076169s
	[INFO] 10.244.0.14:59768 - 55084 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00014968s
	[INFO] 10.244.0.14:59768 - 54855 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000249233s
	[INFO] 10.244.0.14:36102 - 5607 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119443s
	[INFO] 10.244.0.14:36102 - 5171 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000105232s
	[INFO] 10.244.0.14:47236 - 50138 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000279453s
	[INFO] 10.244.0.14:47236 - 49702 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000144781s
	[INFO] 10.244.0.14:41715 - 33677 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001372731s
	[INFO] 10.244.0.14:41715 - 33496 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001496252s
	[INFO] 10.244.0.14:39111 - 1989 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000109269s
	[INFO] 10.244.0.14:39111 - 1560 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000188637s
	[INFO] 10.244.0.21:52495 - 41689 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000259432s
	[INFO] 10.244.0.21:59231 - 46361 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000289471s
	[INFO] 10.244.0.21:49736 - 56506 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000270099s
	[INFO] 10.244.0.21:32826 - 34400 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000285763s
	[INFO] 10.244.0.21:51370 - 14220 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120305s
	[INFO] 10.244.0.21:51278 - 63662 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000288536s
	[INFO] 10.244.0.21:45331 - 33996 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002335128s
	[INFO] 10.244.0.21:37531 - 4837 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001878894s
	[INFO] 10.244.0.21:51175 - 37384 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001771562s
	[INFO] 10.244.0.21:43833 - 55114 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000916817s
	[INFO] 10.244.0.23:57342 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000218735s
	[INFO] 10.244.0.23:59773 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000199263s
	
	
	==> describe nodes <==
	Name:               addons-067378
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-067378
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=addons-067378
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_42_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-067378
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-067378"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:42:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-067378
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 06:47:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 06:47:23 +0000   Thu, 02 Oct 2025 06:42:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 06:47:23 +0000   Thu, 02 Oct 2025 06:42:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 06:47:23 +0000   Thu, 02 Oct 2025 06:42:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 06:47:23 +0000   Thu, 02 Oct 2025 06:43:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-067378
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c49bef4fc834808b914a36b06dbf372
	  System UUID:                2f1814d6-1357-446a-b78d-d0dacf031115
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     cloud-spanner-emulator-85f6b7fc65-nt86x     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  default                     hello-world-app-5d498dc89-whgpg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-bvpt5                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  gcp-auth                    gcp-auth-78565c9fb4-sf7d5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-jv8pp    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m50s
	  kube-system                 coredns-66bc5c9577-hqkgq                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m56s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpathplugin-g5rfp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 etcd-addons-067378                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m1s
	  kube-system                 kindnet-rvljv                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m57s
	  kube-system                 kube-apiserver-addons-067378                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-addons-067378       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-glkj6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-addons-067378                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 metrics-server-85b7d694d7-6x654             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m51s
	  kube-system                 nvidia-device-plugin-daemonset-kjxmr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 registry-66898fdd98-w2szx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-creds-764b6fb674-j77fn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 registry-proxy-zrq82                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 snapshot-controller-7d9fbc56b8-57t4l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 snapshot-controller-7d9fbc56b8-vvfqw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  local-path-storage          local-path-provisioner-648f6765c9-mrnqw     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-x6zz2              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m55s                kube-proxy       
	  Normal   Starting                 5m9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m8s (x8 over 5m8s)  kubelet          Node addons-067378 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m8s (x8 over 5m8s)  kubelet          Node addons-067378 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m8s (x8 over 5m8s)  kubelet          Node addons-067378 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m1s                 kubelet          Node addons-067378 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m1s                 kubelet          Node addons-067378 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m1s                 kubelet          Node addons-067378 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m58s                node-controller  Node addons-067378 event: Registered Node addons-067378 in Controller
	  Normal   NodeReady                4m15s                kubelet          Node addons-067378 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014797] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531434] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.039899] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.787301] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.571073] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 2 05:52] hrtimer: interrupt took 24222969 ns
	[Oct 2 06:40] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:42] overlayfs: idmapped layers are currently not supported
	[  +0.072713] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d] <==
	{"level":"warn","ts":"2025-10-02T06:42:22.843454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.853201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.871004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.893250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.904913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.921215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.938286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.958987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.977215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.989581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.006283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.032945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.055873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.070721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.096383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.123999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.140892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.158898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.241398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:39.215408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:39.231745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:43:00.831153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:43:00.846072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:43:00.951736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:43:00.967064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38046","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [03a89e8a85aa8b6afaf5fac71d171429d214ab40fa4e857d0f32ec4ed024d9dd] <==
	2025/10/02 06:44:21 GCP Auth Webhook started!
	2025/10/02 06:44:28 Ready to marshal response ...
	2025/10/02 06:44:28 Ready to write response ...
	2025/10/02 06:44:28 Ready to marshal response ...
	2025/10/02 06:44:28 Ready to write response ...
	2025/10/02 06:44:28 Ready to marshal response ...
	2025/10/02 06:44:28 Ready to write response ...
	2025/10/02 06:44:48 Ready to marshal response ...
	2025/10/02 06:44:48 Ready to write response ...
	2025/10/02 06:44:54 Ready to marshal response ...
	2025/10/02 06:44:54 Ready to write response ...
	2025/10/02 06:45:06 Ready to marshal response ...
	2025/10/02 06:45:06 Ready to write response ...
	2025/10/02 06:45:27 Ready to marshal response ...
	2025/10/02 06:45:27 Ready to write response ...
	2025/10/02 06:45:49 Ready to marshal response ...
	2025/10/02 06:45:49 Ready to write response ...
	2025/10/02 06:45:49 Ready to marshal response ...
	2025/10/02 06:45:49 Ready to write response ...
	2025/10/02 06:45:57 Ready to marshal response ...
	2025/10/02 06:45:57 Ready to write response ...
	2025/10/02 06:47:26 Ready to marshal response ...
	2025/10/02 06:47:26 Ready to write response ...
	
	
	==> kernel <==
	 06:47:29 up  1:29,  0 user,  load average: 0.94, 2.14, 2.91
	Linux addons-067378 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c] <==
	I1002 06:45:23.117307       1 main.go:301] handling current node
	I1002 06:45:33.111677       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:45:33.112034       1 main.go:301] handling current node
	I1002 06:45:43.108092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:45:43.108235       1 main.go:301] handling current node
	I1002 06:45:53.108890       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:45:53.108927       1 main.go:301] handling current node
	I1002 06:46:03.110523       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:46:03.110567       1 main.go:301] handling current node
	I1002 06:46:13.108597       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:46:13.108651       1 main.go:301] handling current node
	I1002 06:46:23.113673       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:46:23.113713       1 main.go:301] handling current node
	I1002 06:46:33.117283       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:46:33.117401       1 main.go:301] handling current node
	I1002 06:46:43.108073       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:46:43.108111       1 main.go:301] handling current node
	I1002 06:46:53.116508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:46:53.116549       1 main.go:301] handling current node
	I1002 06:47:03.108089       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:47:03.108147       1 main.go:301] handling current node
	I1002 06:47:13.115610       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:47:13.115664       1 main.go:301] handling current node
	I1002 06:47:23.116295       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:47:23.116329       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 06:44:05.469678       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.30.166:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.30.166:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.30.166:443: connect: connection refused" logger="UnhandledError"
	E1002 06:44:05.471391       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.30.166:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.30.166:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.30.166:443: connect: connection refused" logger="UnhandledError"
	W1002 06:44:06.469327       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:44:06.469441       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 06:44:06.469463       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 06:44:06.469542       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:44:06.469566       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 06:44:06.470641       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1002 06:44:10.484077       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.30.166:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.30.166:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1002 06:44:10.484739       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:44:10.484791       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 06:44:10.532063       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1002 06:44:38.399827       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57258: use of closed network connection
	I1002 06:45:06.199954       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 06:45:06.457923       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1002 06:45:06.615291       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.41.164"}
	I1002 06:47:27.008803       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.176.250"}
	
	
	==> kube-controller-manager [b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c] <==
	I1002 06:42:30.836266       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 06:42:30.836598       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 06:42:30.836790       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 06:42:30.837881       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 06:42:30.863608       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-067378" podCIDRs=["10.244.0.0/24"]
	I1002 06:42:30.863757       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 06:42:30.864335       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 06:42:30.911177       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 06:42:30.911268       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 06:42:30.911299       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 06:42:30.933073       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 06:42:37.034469       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 06:42:37.065709       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 06:43:00.824509       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:43:00.824660       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1002 06:43:00.824717       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 06:43:00.925284       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:43:00.940570       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 06:43:00.944709       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 06:43:01.045355       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 06:43:15.833417       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1002 06:43:30.931311       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:43:31.055416       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1002 06:44:00.935344       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:44:01.068524       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f] <==
	I1002 06:42:32.886883       1 server_linux.go:53] "Using iptables proxy"
	I1002 06:42:33.084445       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:42:33.184839       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:42:33.184870       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 06:42:33.184944       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:42:33.260355       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 06:42:33.260406       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:42:33.280788       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:42:33.281079       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:42:33.281093       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:42:33.286416       1 config.go:200] "Starting service config controller"
	I1002 06:42:33.298085       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:42:33.298104       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 06:42:33.294623       1 config.go:309] "Starting node config controller"
	I1002 06:42:33.298137       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:42:33.298148       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:42:33.294266       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:42:33.298155       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:42:33.298160       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 06:42:33.294278       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:42:33.298205       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:42:33.298210       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348] <==
	E1002 06:42:24.091272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:42:24.091360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:42:24.091435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:42:24.095616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:42:24.095771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 06:42:24.095863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 06:42:24.095927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 06:42:24.096009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 06:42:24.096092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:42:24.096162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:42:24.096221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:42:24.096351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 06:42:24.096382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:42:24.910949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 06:42:24.975288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:42:24.978504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:42:25.019759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 06:42:25.035680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 06:42:25.054089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:42:25.188935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:42:25.200696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:42:25.240537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 06:42:25.267139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:42:25.444441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 06:42:28.074951       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 06:46:00 addons-067378 kubelet[1286]: I1002 06:46:00.802821    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e879a0-1f76-4946-a09e-aae2aaf98b37" path="/var/lib/kubelet/pods/45e879a0-1f76-4946-a09e-aae2aaf98b37/volumes"
	Oct 02 06:46:04 addons-067378 kubelet[1286]: I1002 06:46:04.801329    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-kjxmr" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 06:46:26 addons-067378 kubelet[1286]: I1002 06:46:26.887208    1286 scope.go:117] "RemoveContainer" containerID="a5531b6a0ef60965a2120747acc7bdccfd58d99a9c4c65af7652600185f1907c"
	Oct 02 06:46:26 addons-067378 kubelet[1286]: I1002 06:46:26.896622    1286 scope.go:117] "RemoveContainer" containerID="4aa5f862394e869a55b4d109959d0489d37b45a22b32e966f79b45e7d257a9ec"
	Oct 02 06:46:26 addons-067378 kubelet[1286]: E1002 06:46:26.940962    1286 manager.go:1116] Failed to create existing container: /crio-50ad985514d874f32a72f53d6b3f7ac58089a40ca86e2ce5ee617a24fe4f57ee: Error finding container 50ad985514d874f32a72f53d6b3f7ac58089a40ca86e2ce5ee617a24fe4f57ee: Status 404 returned error can't find the container with id 50ad985514d874f32a72f53d6b3f7ac58089a40ca86e2ce5ee617a24fe4f57ee
	Oct 02 06:46:26 addons-067378 kubelet[1286]: E1002 06:46:26.941264    1286 manager.go:1116] Failed to create existing container: /crio-32ec479c8611de26b7e70efaa7c91d0136ef5051d9e6825b7242c31cab4c8a78: Error finding container 32ec479c8611de26b7e70efaa7c91d0136ef5051d9e6825b7242c31cab4c8a78: Status 404 returned error can't find the container with id 32ec479c8611de26b7e70efaa7c91d0136ef5051d9e6825b7242c31cab4c8a78
	Oct 02 06:46:26 addons-067378 kubelet[1286]: E1002 06:46:26.943450    1286 manager.go:1116] Failed to create existing container: /docker/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743/crio-50ad985514d874f32a72f53d6b3f7ac58089a40ca86e2ce5ee617a24fe4f57ee: Error finding container 50ad985514d874f32a72f53d6b3f7ac58089a40ca86e2ce5ee617a24fe4f57ee: Status 404 returned error can't find the container with id 50ad985514d874f32a72f53d6b3f7ac58089a40ca86e2ce5ee617a24fe4f57ee
	Oct 02 06:46:26 addons-067378 kubelet[1286]: E1002 06:46:26.944524    1286 manager.go:1116] Failed to create existing container: /docker/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743/crio-32ec479c8611de26b7e70efaa7c91d0136ef5051d9e6825b7242c31cab4c8a78: Error finding container 32ec479c8611de26b7e70efaa7c91d0136ef5051d9e6825b7242c31cab4c8a78: Status 404 returned error can't find the container with id 32ec479c8611de26b7e70efaa7c91d0136ef5051d9e6825b7242c31cab4c8a78
	Oct 02 06:46:36 addons-067378 kubelet[1286]: I1002 06:46:36.801207    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-w2szx" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 06:46:43 addons-067378 kubelet[1286]: I1002 06:46:43.800372    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zrq82" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 06:47:23 addons-067378 kubelet[1286]: I1002 06:47:23.500737    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-j77fn" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 06:47:23 addons-067378 kubelet[1286]: W1002 06:47:23.529069    1286 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743/crio-58e49be6990ebec2c5db4b88bdac22bafb36367459a18a5ef28268acb9dbac97 WatchSource:0}: Error finding container 58e49be6990ebec2c5db4b88bdac22bafb36367459a18a5ef28268acb9dbac97: Status 404 returned error can't find the container with id 58e49be6990ebec2c5db4b88bdac22bafb36367459a18a5ef28268acb9dbac97
	Oct 02 06:47:25 addons-067378 kubelet[1286]: I1002 06:47:25.392001    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-j77fn" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 06:47:25 addons-067378 kubelet[1286]: I1002 06:47:25.392060    1286 scope.go:117] "RemoveContainer" containerID="df56b7e6ce9f017c37463c379216ef0f2cf0989a162ae20d6e8e0ad193478fc3"
	Oct 02 06:47:26 addons-067378 kubelet[1286]: I1002 06:47:26.397200    1286 scope.go:117] "RemoveContainer" containerID="df56b7e6ce9f017c37463c379216ef0f2cf0989a162ae20d6e8e0ad193478fc3"
	Oct 02 06:47:26 addons-067378 kubelet[1286]: I1002 06:47:26.398100    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-j77fn" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 06:47:26 addons-067378 kubelet[1286]: I1002 06:47:26.398450    1286 scope.go:117] "RemoveContainer" containerID="5f5aa695c2040fe9be5d2f9ec947a26cdd5363a34c2bd46de8e853a70011ef70"
	Oct 02 06:47:26 addons-067378 kubelet[1286]: E1002 06:47:26.398716    1286 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-j77fn_kube-system(62c7e651-a525-434a-b3a2-67917ea0034f)\"" pod="kube-system/registry-creds-764b6fb674-j77fn" podUID="62c7e651-a525-434a-b3a2-67917ea0034f"
	Oct 02 06:47:26 addons-067378 kubelet[1286]: E1002 06:47:26.942180    1286 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2a0417fbd8949d39ac00d0683d2f6d90b4881a39cc2ec5a836deef8289127b81/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2a0417fbd8949d39ac00d0683d2f6d90b4881a39cc2ec5a836deef8289127b81/diff: no such file or directory, extraDiskErr: <nil>
	Oct 02 06:47:26 addons-067378 kubelet[1286]: I1002 06:47:26.957145    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77nw8\" (UniqueName: \"kubernetes.io/projected/03cc7a91-d490-4776-a388-07e17f194a23-kube-api-access-77nw8\") pod \"hello-world-app-5d498dc89-whgpg\" (UID: \"03cc7a91-d490-4776-a388-07e17f194a23\") " pod="default/hello-world-app-5d498dc89-whgpg"
	Oct 02 06:47:26 addons-067378 kubelet[1286]: I1002 06:47:26.957188    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/03cc7a91-d490-4776-a388-07e17f194a23-gcp-creds\") pod \"hello-world-app-5d498dc89-whgpg\" (UID: \"03cc7a91-d490-4776-a388-07e17f194a23\") " pod="default/hello-world-app-5d498dc89-whgpg"
	Oct 02 06:47:27 addons-067378 kubelet[1286]: W1002 06:47:27.193529    1286 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743/crio-878f13cd49af91e07b74fa669da9a32e4e6f8fc7389aa08eb36e821fc98aab2b WatchSource:0}: Error finding container 878f13cd49af91e07b74fa669da9a32e4e6f8fc7389aa08eb36e821fc98aab2b: Status 404 returned error can't find the container with id 878f13cd49af91e07b74fa669da9a32e4e6f8fc7389aa08eb36e821fc98aab2b
	Oct 02 06:47:27 addons-067378 kubelet[1286]: I1002 06:47:27.413923    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-j77fn" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 06:47:27 addons-067378 kubelet[1286]: I1002 06:47:27.413979    1286 scope.go:117] "RemoveContainer" containerID="5f5aa695c2040fe9be5d2f9ec947a26cdd5363a34c2bd46de8e853a70011ef70"
	Oct 02 06:47:27 addons-067378 kubelet[1286]: E1002 06:47:27.414132    1286 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-j77fn_kube-system(62c7e651-a525-434a-b3a2-67917ea0034f)\"" pod="kube-system/registry-creds-764b6fb674-j77fn" podUID="62c7e651-a525-434a-b3a2-67917ea0034f"
	
	
	==> storage-provisioner [23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237] <==
	W1002 06:47:03.509073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:05.513453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:05.520454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:07.524073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:07.528665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:09.532136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:09.539635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:11.546367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:11.559711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:13.563303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:13.568111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:15.571903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:15.578667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:17.582601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:17.589926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:19.593700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:19.599012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:21.605063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:21.610485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:23.614422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:23.619189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:25.623163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:25.627964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:27.631593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:47:27.640082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-067378 -n addons-067378
helpers_test.go:269: (dbg) Run:  kubectl --context addons-067378 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-sp78n ingress-nginx-admission-patch-dqc9b
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-067378 describe pod ingress-nginx-admission-create-sp78n ingress-nginx-admission-patch-dqc9b
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-067378 describe pod ingress-nginx-admission-create-sp78n ingress-nginx-admission-patch-dqc9b: exit status 1 (133.599045ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sp78n" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dqc9b" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-067378 describe pod ingress-nginx-admission-create-sp78n ingress-nginx-admission-patch-dqc9b: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (312.023367ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 06:47:30.243667  304627 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:47:30.244805  304627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:47:30.244861  304627 out.go:374] Setting ErrFile to fd 2...
	I1002 06:47:30.244897  304627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:47:30.245267  304627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:47:30.245729  304627 mustload.go:65] Loading cluster: addons-067378
	I1002 06:47:30.246261  304627 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:47:30.246310  304627 addons.go:606] checking whether the cluster is paused
	I1002 06:47:30.246471  304627 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:47:30.246520  304627 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:47:30.262679  304627 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:47:30.288690  304627 ssh_runner.go:195] Run: systemctl --version
	I1002 06:47:30.288757  304627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:47:30.309848  304627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:47:30.409785  304627 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:47:30.409888  304627 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:47:30.447976  304627 cri.go:89] found id: "5f5aa695c2040fe9be5d2f9ec947a26cdd5363a34c2bd46de8e853a70011ef70"
	I1002 06:47:30.447998  304627 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:47:30.448004  304627 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:47:30.448008  304627 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:47:30.448011  304627 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:47:30.448014  304627 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:47:30.448018  304627 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:47:30.448021  304627 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:47:30.448032  304627 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:47:30.448039  304627 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:47:30.448042  304627 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:47:30.448046  304627 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:47:30.448050  304627 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:47:30.448054  304627 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:47:30.448062  304627 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:47:30.448077  304627 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:47:30.448085  304627 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:47:30.448090  304627 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:47:30.448093  304627 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:47:30.448096  304627 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:47:30.448101  304627 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:47:30.448104  304627 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:47:30.448107  304627 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:47:30.448110  304627 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:47:30.448114  304627 cri.go:89] found id: ""
	I1002 06:47:30.448165  304627 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:47:30.465970  304627 out.go:203] 
	W1002 06:47:30.468903  304627 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:47:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:47:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:47:30.468932  304627 out.go:285] * 
	* 
	W1002 06:47:30.473907  304627 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:47:30.476918  304627 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-067378 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 addons disable ingress --alsologtostderr -v=1: exit status 11 (254.37713ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 06:47:30.535539  304741 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:47:30.536476  304741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:47:30.536492  304741 out.go:374] Setting ErrFile to fd 2...
	I1002 06:47:30.536497  304741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:47:30.536827  304741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:47:30.537171  304741 mustload.go:65] Loading cluster: addons-067378
	I1002 06:47:30.537620  304741 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:47:30.537641  304741 addons.go:606] checking whether the cluster is paused
	I1002 06:47:30.537783  304741 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:47:30.537806  304741 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:47:30.538338  304741 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:47:30.558542  304741 ssh_runner.go:195] Run: systemctl --version
	I1002 06:47:30.558601  304741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:47:30.579920  304741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:47:30.673882  304741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:47:30.673993  304741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:47:30.705263  304741 cri.go:89] found id: "5f5aa695c2040fe9be5d2f9ec947a26cdd5363a34c2bd46de8e853a70011ef70"
	I1002 06:47:30.705295  304741 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:47:30.705301  304741 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:47:30.705304  304741 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:47:30.705308  304741 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:47:30.705311  304741 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:47:30.705314  304741 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:47:30.705317  304741 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:47:30.705320  304741 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:47:30.705344  304741 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:47:30.705348  304741 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:47:30.705352  304741 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:47:30.705355  304741 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:47:30.705359  304741 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:47:30.705362  304741 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:47:30.705367  304741 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:47:30.705370  304741 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:47:30.705374  304741 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:47:30.705378  304741 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:47:30.705381  304741 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:47:30.705386  304741 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:47:30.705389  304741 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:47:30.705392  304741 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:47:30.705396  304741 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:47:30.705401  304741 cri.go:89] found id: ""
	I1002 06:47:30.705460  304741 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:47:30.720825  304741 out.go:203] 
	W1002 06:47:30.723742  304741 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:47:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:47:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:47:30.723770  304741 out.go:285] * 
	* 
	W1002 06:47:30.728740  304741 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:47:30.731587  304741 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-067378 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.85s)
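
Every "addons disable" invocation in this run fails the same way: minikube first checks whether the cluster is paused (addons.go:606 above) by listing kube-system containers with crictl and then running "sudo runc list -f json" on the node, and on this crio profile /run/runc does not exist, so the paused check itself exits non-zero and the command aborts with MK_ADDON_DISABLE_PAUSED (exit status 11). A minimal manual reproduction, assuming SSH access to the same profile and simply reusing the commands logged above:

	$ out/minikube-linux-arm64 -p addons-067378 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds and prints the container IDs listed above
	$ out/minikube-linux-arm64 -p addons-067378 ssh -- sudo runc list -f json                                                     # fails: "open /run/runc: no such file or directory"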

                                                
                                    
TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-bvpt5" [dcc6a172-f895-44c0-80f7-e8b194d333bf] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002777126s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (250.127909ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 06:45:05.687737  302202 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:45:05.688484  302202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:45:05.688498  302202 out.go:374] Setting ErrFile to fd 2...
	I1002 06:45:05.688505  302202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:45:05.688754  302202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:45:05.689044  302202 mustload.go:65] Loading cluster: addons-067378
	I1002 06:45:05.689420  302202 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:45:05.689437  302202 addons.go:606] checking whether the cluster is paused
	I1002 06:45:05.689538  302202 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:45:05.689560  302202 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:45:05.689996  302202 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:45:05.707665  302202 ssh_runner.go:195] Run: systemctl --version
	I1002 06:45:05.707733  302202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:45:05.724406  302202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:45:05.829793  302202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:45:05.829885  302202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:45:05.859879  302202 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:45:05.859911  302202 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:45:05.859917  302202 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:45:05.859920  302202 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:45:05.859924  302202 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:45:05.859929  302202 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:45:05.859932  302202 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:45:05.859936  302202 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:45:05.859939  302202 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:45:05.859951  302202 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:45:05.859957  302202 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:45:05.859961  302202 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:45:05.859964  302202 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:45:05.859971  302202 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:45:05.859975  302202 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:45:05.859980  302202 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:45:05.859986  302202 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:45:05.859991  302202 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:45:05.859994  302202 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:45:05.860001  302202 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:45:05.860006  302202 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:45:05.860010  302202 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:45:05.860012  302202 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:45:05.860019  302202 cri.go:89] found id: ""
	I1002 06:45:05.860073  302202 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:45:05.874919  302202 out.go:203] 
	W1002 06:45:05.877810  302202 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:45:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:45:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:45:05.877838  302202 out.go:285] * 
	* 
	W1002 06:45:05.882904  302202 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:45:05.885712  302202 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-067378 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.41s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.662897ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-6x654" [0118f095-2060-4680-b4c9-c2c78976dda1] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004753441s
addons_test.go:463: (dbg) Run:  kubectl --context addons-067378 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (297.278375ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 06:44:59.427815  302109 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:44:59.428520  302109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:44:59.428532  302109 out.go:374] Setting ErrFile to fd 2...
	I1002 06:44:59.428538  302109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:44:59.428798  302109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:44:59.429095  302109 mustload.go:65] Loading cluster: addons-067378
	I1002 06:44:59.429481  302109 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:44:59.429494  302109 addons.go:606] checking whether the cluster is paused
	I1002 06:44:59.429600  302109 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:44:59.429617  302109 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:44:59.430076  302109 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:44:59.449450  302109 ssh_runner.go:195] Run: systemctl --version
	I1002 06:44:59.449501  302109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:44:59.470292  302109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:44:59.565894  302109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:44:59.565997  302109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:44:59.602325  302109 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:44:59.602345  302109 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:44:59.602350  302109 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:44:59.602354  302109 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:44:59.602357  302109 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:44:59.602361  302109 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:44:59.602365  302109 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:44:59.602368  302109 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:44:59.602371  302109 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:44:59.602377  302109 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:44:59.602381  302109 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:44:59.602384  302109 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:44:59.602387  302109 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:44:59.602390  302109 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:44:59.602393  302109 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:44:59.602400  302109 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:44:59.602404  302109 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:44:59.602408  302109 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:44:59.602411  302109 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:44:59.602414  302109 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:44:59.602419  302109 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:44:59.602422  302109 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:44:59.602425  302109 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:44:59.602428  302109 cri.go:89] found id: ""
	I1002 06:44:59.602479  302109 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:44:59.617916  302109 out.go:203] 
	W1002 06:44:59.621062  302109 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:44:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:44:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:44:59.621088  302109 out.go:285] * 
	* 
	W1002 06:44:59.625974  302109 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:44:59.629361  302109 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-067378 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.41s)
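Note: every addon enable/disable failure in this run shares the same signature. The paused-state check first lists kube-system containers with crictl and then runs "sudo runc list -f json"; on this crio node /run/runc does not exist, so runc exits 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED). The Go sketch below is illustrative only and is not minikube's implementation; the helper name listRuncContainers is hypothetical. It shows one way such a check could treat the missing runc state directory as "no paused containers" instead of a fatal error.

// Minimal sketch (assumption: runc and sudo are on PATH; not minikube's actual code).
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// listRuncContainers is a hypothetical helper: it shells out to
// "sudo runc list -f json" and returns the JSON output, treating a missing
// state directory (the "/run/runc: no such file or directory" error seen
// above) as an empty container list rather than a hard failure.
func listRuncContainers() (string, error) {
	cmd := exec.Command("sudo", "runc", "list", "-f", "json")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		if strings.Contains(stderr.String(), "no such file or directory") {
			// No runc state dir on this node: nothing can be paused via runc.
			return "[]", nil
		}
		return "", fmt.Errorf("runc list: %v: %s", err, stderr.String())
	}
	return stdout.String(), nil
}

func main() {
	out, err := listRuncContainers()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("runc containers:", out)
}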

TestAddons/parallel/CSI (55.08s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1002 06:44:41.807951  294357 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1002 06:44:41.813629  294357 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1002 06:44:41.814265  294357 kapi.go:107] duration metric: took 6.328234ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.630292ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-067378 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-067378 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [0596ba6f-77bf-49cb-93e4-8a064199ec2a] Pending
helpers_test.go:352: "task-pv-pod" [0596ba6f-77bf-49cb-93e4-8a064199ec2a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [0596ba6f-77bf-49cb-93e4-8a064199ec2a] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.005092495s
addons_test.go:572: (dbg) Run:  kubectl --context addons-067378 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-067378 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-067378 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-067378 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-067378 delete pod task-pv-pod: (1.22077428s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-067378 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-067378 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-067378 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [108a314a-c0b2-4b80-a22b-98a97fda2a8e] Pending
helpers_test.go:352: "task-pv-pod-restore" [108a314a-c0b2-4b80-a22b-98a97fda2a8e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [108a314a-c0b2-4b80-a22b-98a97fda2a8e] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003850112s
addons_test.go:614: (dbg) Run:  kubectl --context addons-067378 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-067378 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-067378 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (292.724294ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 06:45:36.380226  302987 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:45:36.381035  302987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:45:36.381067  302987 out.go:374] Setting ErrFile to fd 2...
	I1002 06:45:36.381087  302987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:45:36.381780  302987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:45:36.382130  302987 mustload.go:65] Loading cluster: addons-067378
	I1002 06:45:36.382529  302987 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:45:36.382550  302987 addons.go:606] checking whether the cluster is paused
	I1002 06:45:36.382694  302987 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:45:36.382719  302987 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:45:36.383283  302987 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:45:36.400501  302987 ssh_runner.go:195] Run: systemctl --version
	I1002 06:45:36.400563  302987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:45:36.419329  302987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:45:36.519886  302987 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:45:36.519967  302987 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:45:36.581345  302987 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:45:36.581433  302987 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:45:36.581458  302987 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:45:36.581483  302987 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:45:36.581519  302987 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:45:36.581539  302987 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:45:36.581565  302987 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:45:36.581591  302987 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:45:36.581624  302987 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:45:36.581656  302987 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:45:36.581689  302987 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:45:36.581717  302987 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:45:36.581735  302987 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:45:36.581771  302987 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:45:36.581791  302987 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:45:36.581816  302987 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:45:36.581872  302987 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:45:36.581893  302987 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:45:36.581915  302987 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:45:36.581957  302987 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:45:36.581990  302987 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:45:36.582040  302987 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:45:36.582059  302987 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:45:36.582088  302987 cri.go:89] found id: ""
	I1002 06:45:36.582183  302987 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:45:36.603440  302987 out.go:203] 
	W1002 06:45:36.606350  302987 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:45:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:45:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:45:36.606440  302987 out.go:285] * 
	* 
	W1002 06:45:36.611619  302987 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:45:36.614552  302987 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-067378 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (254.547565ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 06:45:36.675417  303039 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:45:36.676172  303039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:45:36.676188  303039 out.go:374] Setting ErrFile to fd 2...
	I1002 06:45:36.676194  303039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:45:36.676473  303039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:45:36.676819  303039 mustload.go:65] Loading cluster: addons-067378
	I1002 06:45:36.677208  303039 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:45:36.677226  303039 addons.go:606] checking whether the cluster is paused
	I1002 06:45:36.677345  303039 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:45:36.677366  303039 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:45:36.677811  303039 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:45:36.695894  303039 ssh_runner.go:195] Run: systemctl --version
	I1002 06:45:36.695965  303039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:45:36.713585  303039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:45:36.809732  303039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:45:36.809811  303039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:45:36.842028  303039 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:45:36.842050  303039 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:45:36.842055  303039 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:45:36.842059  303039 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:45:36.842063  303039 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:45:36.842068  303039 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:45:36.842084  303039 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:45:36.842088  303039 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:45:36.842092  303039 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:45:36.842098  303039 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:45:36.842107  303039 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:45:36.842111  303039 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:45:36.842115  303039 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:45:36.842119  303039 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:45:36.842126  303039 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:45:36.842131  303039 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:45:36.842135  303039 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:45:36.842139  303039 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:45:36.842142  303039 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:45:36.842145  303039 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:45:36.842150  303039 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:45:36.842157  303039 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:45:36.842161  303039 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:45:36.842163  303039 cri.go:89] found id: ""
	I1002 06:45:36.842215  303039 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:45:36.859219  303039 out.go:203] 
	W1002 06:45:36.862289  303039 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:45:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:45:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:45:36.862324  303039 out.go:285] * 
	* 
	W1002 06:45:36.868546  303039 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:45:36.871648  303039 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-067378 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (55.08s)

TestAddons/parallel/Headlamp (3.15s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-067378 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-067378 --alsologtostderr -v=1: exit status 11 (256.282781ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 06:44:38.707620  301185 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:44:38.708399  301185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:44:38.708454  301185 out.go:374] Setting ErrFile to fd 2...
	I1002 06:44:38.708475  301185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:44:38.708822  301185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:44:38.709248  301185 mustload.go:65] Loading cluster: addons-067378
	I1002 06:44:38.709698  301185 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:44:38.709738  301185 addons.go:606] checking whether the cluster is paused
	I1002 06:44:38.709877  301185 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:44:38.709915  301185 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:44:38.710413  301185 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:44:38.734163  301185 ssh_runner.go:195] Run: systemctl --version
	I1002 06:44:38.734214  301185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:44:38.752409  301185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:44:38.845305  301185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:44:38.845406  301185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:44:38.876811  301185 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:44:38.876851  301185 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:44:38.876857  301185 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:44:38.876861  301185 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:44:38.876864  301185 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:44:38.876869  301185 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:44:38.876872  301185 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:44:38.876875  301185 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:44:38.876879  301185 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:44:38.876897  301185 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:44:38.876905  301185 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:44:38.876908  301185 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:44:38.876911  301185 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:44:38.876921  301185 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:44:38.876929  301185 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:44:38.876939  301185 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:44:38.876945  301185 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:44:38.876950  301185 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:44:38.876953  301185 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:44:38.876957  301185 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:44:38.876961  301185 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:44:38.876964  301185 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:44:38.876967  301185 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:44:38.876969  301185 cri.go:89] found id: ""
	I1002 06:44:38.877036  301185 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:44:38.891544  301185 out.go:203] 
	W1002 06:44:38.894427  301185 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:44:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:44:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:44:38.894462  301185 out.go:285] * 
	* 
	W1002 06:44:38.899662  301185 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:44:38.902416  301185 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-067378 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-067378
helpers_test.go:243: (dbg) docker inspect addons-067378:

-- stdout --
	[
	    {
	        "Id": "be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743",
	        "Created": "2025-10-02T06:42:00.285266979Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295519,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:42:00.437236977Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743/hostname",
	        "HostsPath": "/var/lib/docker/containers/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743/hosts",
	        "LogPath": "/var/lib/docker/containers/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743-json.log",
	        "Name": "/addons-067378",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-067378:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-067378",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743",
	                "LowerDir": "/var/lib/docker/overlay2/27be614f558d0a8c3c52c831d477e8c5c9e368d506c2a9434a912568103adf6f-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/27be614f558d0a8c3c52c831d477e8c5c9e368d506c2a9434a912568103adf6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/27be614f558d0a8c3c52c831d477e8c5c9e368d506c2a9434a912568103adf6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/27be614f558d0a8c3c52c831d477e8c5c9e368d506c2a9434a912568103adf6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-067378",
	                "Source": "/var/lib/docker/volumes/addons-067378/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-067378",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-067378",
	                "name.minikube.sigs.k8s.io": "addons-067378",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e183065c5c1950ede2433c49d3f8899bad7fc9dd4dcfd4ca487ce9abfcd56f29",
	            "SandboxKey": "/var/run/docker/netns/e183065c5c19",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-067378": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:20:fc:31:81:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de8269b06b79a3e18d05347fbb9c73f4a624138eb10bd2509355bfcb5f7a406e",
	                    "EndpointID": "fb9d1ced2a7c935d95b479062b33b33a16e640c84101c4e40ed28a1f530269cf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-067378",
	                        "be6899c5910e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-067378 -n addons-067378
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-067378 logs -n 25: (1.501752309s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-954800 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-954800   │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │ 02 Oct 25 06:41 UTC │
	│ delete  │ -p download-only-954800                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-954800   │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │ 02 Oct 25 06:41 UTC │
	│ start   │ -o=json --download-only -p download-only-378847 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-378847   │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │ 02 Oct 25 06:41 UTC │
	│ delete  │ -p download-only-378847                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-378847   │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │ 02 Oct 25 06:41 UTC │
	│ delete  │ -p download-only-954800                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-954800   │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │ 02 Oct 25 06:41 UTC │
	│ delete  │ -p download-only-378847                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-378847   │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │ 02 Oct 25 06:41 UTC │
	│ start   │ --download-only -p download-docker-396070 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-396070 │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │                     │
	│ delete  │ -p download-docker-396070                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-396070 │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │ 02 Oct 25 06:41 UTC │
	│ start   │ --download-only -p binary-mirror-242470 --alsologtostderr --binary-mirror http://127.0.0.1:34303 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-242470   │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │                     │
	│ delete  │ -p binary-mirror-242470                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-242470   │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │ 02 Oct 25 06:41 UTC │
	│ addons  │ disable dashboard -p addons-067378                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │                     │
	│ addons  │ enable dashboard -p addons-067378                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │                     │
	│ start   │ -p addons-067378 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │ 02 Oct 25 06:44 UTC │
	│ addons  │ addons-067378 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:44 UTC │                     │
	│ addons  │ addons-067378 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:44 UTC │                     │
	│ addons  │ enable headlamp -p addons-067378 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-067378          │ jenkins │ v1.37.0 │ 02 Oct 25 06:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:41:33
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:41:33.571837  295123 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:41:33.571950  295123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:41:33.571961  295123 out.go:374] Setting ErrFile to fd 2...
	I1002 06:41:33.571966  295123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:41:33.572226  295123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:41:33.572671  295123 out.go:368] Setting JSON to false
	I1002 06:41:33.573513  295123 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5045,"bootTime":1759382249,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 06:41:33.573582  295123 start.go:140] virtualization:  
	I1002 06:41:33.576912  295123 out.go:179] * [addons-067378] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 06:41:33.580601  295123 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:41:33.580662  295123 notify.go:220] Checking for updates...
	I1002 06:41:33.586457  295123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:41:33.589395  295123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 06:41:33.592355  295123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 06:41:33.595220  295123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 06:41:33.598064  295123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:41:33.601212  295123 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:41:33.628509  295123 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 06:41:33.628639  295123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:41:33.683589  295123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 06:41:33.674283519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:41:33.683695  295123 docker.go:318] overlay module found
	I1002 06:41:33.688759  295123 out.go:179] * Using the docker driver based on user configuration
	I1002 06:41:33.691744  295123 start.go:304] selected driver: docker
	I1002 06:41:33.691767  295123 start.go:924] validating driver "docker" against <nil>
	I1002 06:41:33.691781  295123 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:41:33.692497  295123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:41:33.747437  295123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 06:41:33.737895417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:41:33.747596  295123 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:41:33.747835  295123 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:41:33.750873  295123 out.go:179] * Using Docker driver with root privileges
	I1002 06:41:33.753663  295123 cni.go:84] Creating CNI manager for ""
	I1002 06:41:33.753747  295123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:41:33.753762  295123 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:41:33.753845  295123 start.go:348] cluster config:
	{Name:addons-067378 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-067378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
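
	The cluster config dumped above is what gets persisted to the profile directory a few lines later (config.json under .minikube/profiles/addons-067378). As a rough sketch, assuming jq is installed on the Jenkins host and that the JSON keys mirror the Go field names shown above (an assumption, not taken from this run's output), the key settings could be pulled back out like this:

	    # Illustrative only, not part of the test run; the path is copied from the log below.
	    jq '.KubernetesConfig.KubernetesVersion, .Memory, .Driver' \
	      /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/config.json
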
	I1002 06:41:33.756921  295123 out.go:179] * Starting "addons-067378" primary control-plane node in "addons-067378" cluster
	I1002 06:41:33.759725  295123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:41:33.762728  295123 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:41:33.765574  295123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:41:33.765638  295123 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 06:41:33.765654  295123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:41:33.765668  295123 cache.go:58] Caching tarball of preloaded images
	I1002 06:41:33.765762  295123 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 06:41:33.765773  295123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 06:41:33.766114  295123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/config.json ...
	I1002 06:41:33.766148  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/config.json: {Name:mka25b4481cb88cb84ea2a131c49da153455d30a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:41:33.781708  295123 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:41:33.781838  295123 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 06:41:33.781857  295123 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 06:41:33.781862  295123 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 06:41:33.781869  295123 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 06:41:33.781875  295123 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 06:41:52.074232  295123 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 06:41:52.074286  295123 cache.go:232] Successfully downloaded all kic artifacts
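
	The cache step above first looks for the pinned kicbase image in the local Docker daemon and in minikube's on-disk cache, then loads it from the cached tarball. A hedged way to confirm the image ended up in the daemon, assuming Docker CLI access on the same host (the digest is copied verbatim from the log, the check itself is illustrative):

	    # Illustrative check only; succeeds once the cached tarball has been loaded into the daemon.
	    docker image inspect \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d \
	      --format '{{.Id}} {{.Architecture}}'
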
	I1002 06:41:52.074317  295123 start.go:360] acquireMachinesLock for addons-067378: {Name:mk901da383b3ee543c55d3fb99cc36a665e7de29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:41:52.074441  295123 start.go:364] duration metric: took 98.355µs to acquireMachinesLock for "addons-067378"
	I1002 06:41:52.074473  295123 start.go:93] Provisioning new machine with config: &{Name:addons-067378 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-067378 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:41:52.074589  295123 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:41:52.078151  295123 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 06:41:52.078398  295123 start.go:159] libmachine.API.Create for "addons-067378" (driver="docker")
	I1002 06:41:52.078466  295123 client.go:168] LocalClient.Create starting
	I1002 06:41:52.078615  295123 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem
	I1002 06:41:52.870837  295123 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem
	I1002 06:41:53.195908  295123 cli_runner.go:164] Run: docker network inspect addons-067378 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:41:53.212894  295123 cli_runner.go:211] docker network inspect addons-067378 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:41:53.212975  295123 network_create.go:284] running [docker network inspect addons-067378] to gather additional debugging logs...
	I1002 06:41:53.213000  295123 cli_runner.go:164] Run: docker network inspect addons-067378
	W1002 06:41:53.229722  295123 cli_runner.go:211] docker network inspect addons-067378 returned with exit code 1
	I1002 06:41:53.229749  295123 network_create.go:287] error running [docker network inspect addons-067378]: docker network inspect addons-067378: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-067378 not found
	I1002 06:41:53.229774  295123 network_create.go:289] output of [docker network inspect addons-067378]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-067378 not found
	
	** /stderr **
	I1002 06:41:53.229899  295123 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:41:53.245675  295123 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001940c70}
	I1002 06:41:53.245712  295123 network_create.go:124] attempt to create docker network addons-067378 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:41:53.245774  295123 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-067378 addons-067378
	I1002 06:41:53.300325  295123 network_create.go:108] docker network addons-067378 192.168.49.0/24 created
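
	The network is created with a fixed subnet and gateway so the node can later be pinned to the static IP 192.168.49.2. A minimal sketch for verifying the result by hand, assuming access to the same Docker host (not something the test itself runs):

	    # Illustrative verification of the bridge network created above.
	    docker network inspect addons-067378 \
	      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
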
	I1002 06:41:53.300358  295123 kic.go:121] calculated static IP "192.168.49.2" for the "addons-067378" container
	I1002 06:41:53.300453  295123 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:41:53.316027  295123 cli_runner.go:164] Run: docker volume create addons-067378 --label name.minikube.sigs.k8s.io=addons-067378 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:41:53.333327  295123 oci.go:103] Successfully created a docker volume addons-067378
	I1002 06:41:53.333428  295123 cli_runner.go:164] Run: docker run --rm --name addons-067378-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-067378 --entrypoint /usr/bin/test -v addons-067378:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:41:55.605411  295123 cli_runner.go:217] Completed: docker run --rm --name addons-067378-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-067378 --entrypoint /usr/bin/test -v addons-067378:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.271942847s)
	I1002 06:41:55.605440  295123 oci.go:107] Successfully prepared a docker volume addons-067378
	I1002 06:41:55.605478  295123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:41:55.605500  295123 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:41:55.605563  295123 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-067378:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:42:00.056251  295123 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-067378:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.45062096s)
	I1002 06:42:00.056292  295123 kic.go:203] duration metric: took 4.45078723s to extract preloaded images to volume ...
	W1002 06:42:00.056471  295123 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 06:42:00.056594  295123 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:42:00.259211  295123 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-067378 --name addons-067378 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-067378 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-067378 --network addons-067378 --ip 192.168.49.2 --volume addons-067378:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:42:00.654089  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Running}}
	I1002 06:42:00.671961  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:00.699021  295123 cli_runner.go:164] Run: docker exec addons-067378 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:42:00.752980  295123 oci.go:144] the created container "addons-067378" has a running status.
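
	The docker run above publishes SSH, the API server port and a few auxiliary ports on 127.0.0.1 with randomly assigned host ports; later log lines resolve 22/tcp to 33138. A hedged sketch for looking those mappings up manually:

	    # Illustrative only; maps container ports to the loopback host ports chosen by Docker.
	    docker port addons-067378 22/tcp     # SSH (33138 in this run)
	    docker port addons-067378 8443/tcp   # Kubernetes API server
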
	I1002 06:42:00.753007  295123 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa...
	I1002 06:42:01.375233  295123 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:42:01.396387  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:01.413611  295123 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:42:01.413638  295123 kic_runner.go:114] Args: [docker exec --privileged addons-067378 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:42:01.454893  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:01.474916  295123 machine.go:93] provisionDockerMachine start ...
	I1002 06:42:01.475041  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:01.493222  295123 main.go:141] libmachine: Using SSH client type: native
	I1002 06:42:01.493561  295123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1002 06:42:01.493579  295123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:42:01.494268  295123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 06:42:04.627034  295123 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-067378
	
	I1002 06:42:04.627060  295123 ubuntu.go:182] provisioning hostname "addons-067378"
	I1002 06:42:04.627170  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:04.645529  295123 main.go:141] libmachine: Using SSH client type: native
	I1002 06:42:04.645835  295123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1002 06:42:04.645851  295123 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-067378 && echo "addons-067378" | sudo tee /etc/hostname
	I1002 06:42:04.784853  295123 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-067378
	
	I1002 06:42:04.784935  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:04.803551  295123 main.go:141] libmachine: Using SSH client type: native
	I1002 06:42:04.803864  295123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1002 06:42:04.803888  295123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-067378' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-067378/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-067378' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:42:04.935478  295123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:42:04.935507  295123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 06:42:04.935529  295123 ubuntu.go:190] setting up certificates
	I1002 06:42:04.935539  295123 provision.go:84] configureAuth start
	I1002 06:42:04.935622  295123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-067378
	I1002 06:42:04.953563  295123 provision.go:143] copyHostCerts
	I1002 06:42:04.953651  295123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 06:42:04.953782  295123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 06:42:04.953852  295123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 06:42:04.953906  295123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.addons-067378 san=[127.0.0.1 192.168.49.2 addons-067378 localhost minikube]
	I1002 06:42:05.273181  295123 provision.go:177] copyRemoteCerts
	I1002 06:42:05.273250  295123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:42:05.273290  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:05.290302  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:05.387062  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 06:42:05.405258  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 06:42:05.424206  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 06:42:05.441651  295123 provision.go:87] duration metric: took 506.082719ms to configureAuth
	I1002 06:42:05.441677  295123 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:42:05.441863  295123 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:42:05.441979  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:05.459041  295123 main.go:141] libmachine: Using SSH client type: native
	I1002 06:42:05.459386  295123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1002 06:42:05.459407  295123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 06:42:05.694399  295123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 06:42:05.694422  295123 machine.go:96] duration metric: took 4.219482672s to provisionDockerMachine
	I1002 06:42:05.694431  295123 client.go:171] duration metric: took 13.615957841s to LocalClient.Create
	I1002 06:42:05.694460  295123 start.go:167] duration metric: took 13.616049534s to libmachine.API.Create "addons-067378"
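
	The sysconfig command above writes an insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube and restarts CRI-O over SSH. A hedged way to confirm the drop-in and the restart from the host side, assuming systemd is PID 1 inside the kic container (illustrative, not part of the run):

	    # Illustrative checks only.
	    docker exec addons-067378 cat /etc/sysconfig/crio.minikube
	    docker exec addons-067378 systemctl is-active crio
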
	I1002 06:42:05.694467  295123 start.go:293] postStartSetup for "addons-067378" (driver="docker")
	I1002 06:42:05.694476  295123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:42:05.694544  295123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:42:05.694584  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:05.712617  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:05.807179  295123 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:42:05.810498  295123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:42:05.810527  295123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:42:05.810539  295123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 06:42:05.810604  295123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 06:42:05.810634  295123 start.go:296] duration metric: took 116.161636ms for postStartSetup
	I1002 06:42:05.810945  295123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-067378
	I1002 06:42:05.827048  295123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/config.json ...
	I1002 06:42:05.827384  295123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:42:05.827439  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:05.850102  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:05.939470  295123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:42:05.943794  295123 start.go:128] duration metric: took 13.869189388s to createHost
	I1002 06:42:05.943819  295123 start.go:83] releasing machines lock for "addons-067378", held for 13.86936361s
	I1002 06:42:05.943895  295123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-067378
	I1002 06:42:05.959875  295123 ssh_runner.go:195] Run: cat /version.json
	I1002 06:42:05.959936  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:05.959953  295123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:42:05.960004  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:05.977737  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:05.977982  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:06.168759  295123 ssh_runner.go:195] Run: systemctl --version
	I1002 06:42:06.175160  295123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 06:42:06.211880  295123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:42:06.216259  295123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:42:06.216329  295123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:42:06.244853  295123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 06:42:06.244880  295123 start.go:495] detecting cgroup driver to use...
	I1002 06:42:06.244912  295123 detect.go:187] detected "cgroupfs" cgroup driver on host os
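
	The cgroup driver detected here ("cgroupfs") is what gets pushed into both the CRI-O config and the kubelet configuration further down. A rough host-side cross-check, assuming Docker's view matches the detection logic (illustrative only):

	    # Illustrative only; both should agree with the "cgroupfs" detection above.
	    docker info --format '{{.CgroupDriver}}'
	    stat -fc %T /sys/fs/cgroup/    # tmpfs => cgroup v1, cgroup2fs => cgroup v2
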
	I1002 06:42:06.244969  295123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 06:42:06.261788  295123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 06:42:06.274723  295123 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:42:06.274794  295123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:42:06.292166  295123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:42:06.311239  295123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:42:06.428042  295123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:42:06.555921  295123 docker.go:234] disabling docker service ...
	I1002 06:42:06.556070  295123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:42:06.579610  295123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:42:06.593270  295123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:42:06.713103  295123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:42:06.834792  295123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:42:06.846934  295123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:42:06.860625  295123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 06:42:06.860694  295123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:42:06.869516  295123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 06:42:06.869580  295123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:42:06.878262  295123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:42:06.886964  295123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:42:06.895974  295123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:42:06.904281  295123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:42:06.913053  295123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:42:06.926162  295123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 06:42:06.934872  295123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:42:06.942524  295123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:42:06.949891  295123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:42:07.055223  295123 ssh_runner.go:195] Run: sudo systemctl restart crio
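
	The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. A hedged spot-check of the rewritten keys, run from the host (illustrative, not part of the test):

	    # Illustrative only; greps the keys the sed commands above are expected to have set.
	    docker exec addons-067378 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
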
	I1002 06:42:07.182900  295123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 06:42:07.183031  295123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 06:42:07.186882  295123 start.go:563] Will wait 60s for crictl version
	I1002 06:42:07.186993  295123 ssh_runner.go:195] Run: which crictl
	I1002 06:42:07.190613  295123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:42:07.214674  295123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 06:42:07.214815  295123 ssh_runner.go:195] Run: crio --version
	I1002 06:42:07.245355  295123 ssh_runner.go:195] Run: crio --version
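
	The crictl endpoint was pinned to the CRI-O socket via /etc/crictl.yaml a few lines earlier, so the version probes above all go over unix:///var/run/crio/crio.sock. The same check can be reproduced by hand from the host, as a sketch (not something the harness runs):

	    # Illustrative only; equivalent to the crictl version call in the log.
	    docker exec addons-067378 sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
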
	I1002 06:42:07.278557  295123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 06:42:07.281426  295123 cli_runner.go:164] Run: docker network inspect addons-067378 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:42:07.297823  295123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:42:07.301844  295123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:42:07.311575  295123 kubeadm.go:883] updating cluster {Name:addons-067378 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-067378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:42:07.311688  295123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 06:42:07.311743  295123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:42:07.347297  295123 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:42:07.347322  295123 crio.go:433] Images already preloaded, skipping extraction
	I1002 06:42:07.347379  295123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:42:07.371554  295123 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 06:42:07.371577  295123 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:42:07.371585  295123 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 06:42:07.371720  295123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-067378 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-067378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
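
	The [Unit]/[Service] fragment above is rendered into a systemd drop-in; the scp lines further down copy it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes) alongside the kubelet.service unit. A hedged check that the flags landed as written (illustrative only):

	    # Illustrative only; shows the generated kubelet drop-in on the node.
	    docker exec addons-067378 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
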
	I1002 06:42:07.371808  295123 ssh_runner.go:195] Run: crio config
	I1002 06:42:07.427134  295123 cni.go:84] Creating CNI manager for ""
	I1002 06:42:07.427164  295123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:42:07.427181  295123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:42:07.427206  295123 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-067378 NodeName:addons-067378 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:42:07.427365  295123 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-067378"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:42:07.427443  295123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:42:07.435466  295123 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:42:07.435567  295123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:42:07.443580  295123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 06:42:07.456907  295123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:42:07.469855  295123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
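
	At this point the kubeadm configuration rendered above has been copied to /var/tmp/minikube/kubeadm.yaml.new and the kubelet units are in place. As a rough sanity check, assuming the v1.34 kubeadm binary still ships the `config validate` subcommand, the generated file could be validated in place (illustrative, not run by the harness):

	    # Illustrative only; binary and config paths are taken from the log.
	    docker exec addons-067378 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new
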
	I1002 06:42:07.482966  295123 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:42:07.486573  295123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:42:07.496542  295123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:42:07.603205  295123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:42:07.619768  295123 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378 for IP: 192.168.49.2
	I1002 06:42:07.619833  295123 certs.go:195] generating shared ca certs ...
	I1002 06:42:07.619877  295123 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:07.620048  295123 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 06:42:08.245253  295123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt ...
	I1002 06:42:08.245289  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt: {Name:mk8f52b922b701ca88ac15b4067ef5563f1025f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:08.246153  295123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key ...
	I1002 06:42:08.246172  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key: {Name:mk5b501a84195826066992c4a112a0a97eb1d5ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:08.246813  295123 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 06:42:08.451500  295123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt ...
	I1002 06:42:08.451531  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt: {Name:mkeba7e1f2385589bffb45ecff4ebd8abdca6a34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:08.451705  295123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key ...
	I1002 06:42:08.451721  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key: {Name:mkb63062080aec405421dd75f400c3122397125a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:08.451811  295123 certs.go:257] generating profile certs ...
	I1002 06:42:08.451876  295123 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.key
	I1002 06:42:08.451893  295123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt with IP's: []
	I1002 06:42:08.866526  295123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt ...
	I1002 06:42:08.866557  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: {Name:mk6fd84a6d92953c0d2c0107b9c19fa02585ab28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:08.866748  295123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.key ...
	I1002 06:42:08.866763  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.key: {Name:mk5b1ab617eb5935fcb095e6c579d7151fcfa5ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:08.866844  295123 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.key.0a4f8341
	I1002 06:42:08.866863  295123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.crt.0a4f8341 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 06:42:10.194587  295123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.crt.0a4f8341 ...
	I1002 06:42:10.194621  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.crt.0a4f8341: {Name:mkd13c13b1c48ac3fa0b870434d6c8910e883aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:10.195472  295123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.key.0a4f8341 ...
	I1002 06:42:10.195493  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.key.0a4f8341: {Name:mk5d43f7a4741a8639b56f306d2bf3c5e007e199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:10.195586  295123 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.crt.0a4f8341 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.crt
	I1002 06:42:10.195672  295123 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.key.0a4f8341 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.key
	I1002 06:42:10.195730  295123 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.key
	I1002 06:42:10.195752  295123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.crt with IP's: []
	I1002 06:42:10.526719  295123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.crt ...
	I1002 06:42:10.526752  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.crt: {Name:mk1ef9672a854b186a4c97bb8db7ff752f395991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:10.526928  295123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.key ...
	I1002 06:42:10.526942  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.key: {Name:mk6c6f9e91f3733ab2c68da2aa81326c528adf88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:10.527147  295123 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:42:10.527191  295123 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 06:42:10.527219  295123 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:42:10.527245  295123 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 06:42:10.527823  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:42:10.546234  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 06:42:10.565267  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:42:10.582319  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 06:42:10.599661  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 06:42:10.617354  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 06:42:10.634895  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:42:10.656941  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 06:42:10.677798  295123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:42:10.696365  295123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:42:10.709728  295123 ssh_runner.go:195] Run: openssl version
	I1002 06:42:10.716265  295123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:42:10.724656  295123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:42:10.728252  295123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:42:10.728359  295123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:42:10.769473  295123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
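The two commands above compute the subject hash of minikubeCA.pem with openssl and symlink it into /etc/ssl/certs as b5213941.0, which is how the node's TLS stack locates the cluster CA. Whether the hash and the link still line up can be checked on a live node with the same paths the log uses (a sketch):

    minikube -p addons-067378 ssh -- openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    minikube -p addons-067378 ssh -- ls -l /etc/ssl/certs/b5213941.0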
	I1002 06:42:10.777650  295123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:42:10.781245  295123 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:42:10.781321  295123 kubeadm.go:400] StartCluster: {Name:addons-067378 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-067378 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:42:10.781436  295123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:42:10.781516  295123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:42:10.809513  295123 cri.go:89] found id: ""
	I1002 06:42:10.809681  295123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:42:10.818224  295123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:42:10.825976  295123 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:42:10.826111  295123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:42:10.833761  295123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:42:10.833783  295123 kubeadm.go:157] found existing configuration files:
	
	I1002 06:42:10.833849  295123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:42:10.841656  295123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:42:10.841746  295123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:42:10.849035  295123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:42:10.857244  295123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:42:10.857345  295123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:42:10.864793  295123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:42:10.872819  295123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:42:10.872908  295123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:42:10.880591  295123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:42:10.888649  295123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:42:10.888738  295123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:42:10.896294  295123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:42:10.937444  295123 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:42:10.937731  295123 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:42:10.960699  295123 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:42:10.960835  295123 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 06:42:10.960907  295123 kubeadm.go:318] OS: Linux
	I1002 06:42:10.960989  295123 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:42:10.961078  295123 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 06:42:10.961168  295123 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:42:10.961256  295123 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:42:10.961342  295123 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:42:10.961419  295123 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:42:10.961487  295123 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:42:10.961569  295123 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:42:10.961646  295123 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 06:42:11.031703  295123 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:42:11.031835  295123 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:42:11.031939  295123 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:42:11.039877  295123 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:42:11.045630  295123 out.go:252]   - Generating certificates and keys ...
	I1002 06:42:11.045809  295123 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:42:11.045932  295123 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:42:11.505843  295123 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:42:12.400307  295123 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:42:13.735341  295123 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:42:15.178717  295123 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:42:15.453379  295123 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:42:15.453518  295123 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-067378 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:42:16.243114  295123 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:42:16.243271  295123 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-067378 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:42:16.521737  295123 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:42:16.813622  295123 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:42:17.175400  295123 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:42:17.175648  295123 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:42:17.893731  295123 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:42:18.337709  295123 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:42:18.450908  295123 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:42:19.075090  295123 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:42:19.318399  295123 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:42:19.319075  295123 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:42:19.321873  295123 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:42:19.325194  295123 out.go:252]   - Booting up control plane ...
	I1002 06:42:19.325316  295123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:42:19.325397  295123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:42:19.325466  295123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:42:19.340234  295123 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:42:19.340349  295123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:42:19.347310  295123 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:42:19.347700  295123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:42:19.347750  295123 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:42:19.487005  295123 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:42:19.487233  295123 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:42:20.000724  295123 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 516.677854ms
	I1002 06:42:20.005451  295123 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:42:20.007171  295123 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:42:20.007647  295123 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:42:20.007742  295123 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:42:22.504164  295123 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.493980592s
	I1002 06:42:24.111949  295123 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.104742169s
	I1002 06:42:26.010915  295123 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002965151s
	I1002 06:42:26.031823  295123 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 06:42:26.049405  295123 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 06:42:26.067365  295123 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 06:42:26.067601  295123 kubeadm.go:318] [mark-control-plane] Marking the node addons-067378 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 06:42:26.081283  295123 kubeadm.go:318] [bootstrap-token] Using token: 6muyxj.gpbfsrhp5ca1bx8q
	I1002 06:42:26.084410  295123 out.go:252]   - Configuring RBAC rules ...
	I1002 06:42:26.084559  295123 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 06:42:26.089435  295123 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 06:42:26.101950  295123 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 06:42:26.106702  295123 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 06:42:26.112111  295123 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 06:42:26.116331  295123 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 06:42:26.423848  295123 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 06:42:26.857816  295123 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 06:42:27.418105  295123 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 06:42:27.419527  295123 kubeadm.go:318] 
	I1002 06:42:27.419601  295123 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 06:42:27.419608  295123 kubeadm.go:318] 
	I1002 06:42:27.419689  295123 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 06:42:27.419693  295123 kubeadm.go:318] 
	I1002 06:42:27.419719  295123 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 06:42:27.419781  295123 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 06:42:27.419838  295123 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 06:42:27.419844  295123 kubeadm.go:318] 
	I1002 06:42:27.419901  295123 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 06:42:27.419905  295123 kubeadm.go:318] 
	I1002 06:42:27.419955  295123 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 06:42:27.419960  295123 kubeadm.go:318] 
	I1002 06:42:27.420015  295123 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 06:42:27.420093  295123 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 06:42:27.420165  295123 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 06:42:27.420169  295123 kubeadm.go:318] 
	I1002 06:42:27.420258  295123 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 06:42:27.420352  295123 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 06:42:27.420358  295123 kubeadm.go:318] 
	I1002 06:42:27.420446  295123 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 6muyxj.gpbfsrhp5ca1bx8q \
	I1002 06:42:27.420554  295123 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf \
	I1002 06:42:27.420575  295123 kubeadm.go:318] 	--control-plane 
	I1002 06:42:27.420579  295123 kubeadm.go:318] 
	I1002 06:42:27.420668  295123 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 06:42:27.420672  295123 kubeadm.go:318] 
	I1002 06:42:27.420758  295123 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 6muyxj.gpbfsrhp5ca1bx8q \
	I1002 06:42:27.420865  295123 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf 
	I1002 06:42:27.423162  295123 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 06:42:27.423394  295123 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 06:42:27.423514  295123 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:42:27.423537  295123 cni.go:84] Creating CNI manager for ""
	I1002 06:42:27.423545  295123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:42:27.426680  295123 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 06:42:27.429557  295123 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 06:42:27.433589  295123 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 06:42:27.433623  295123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 06:42:27.447321  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
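The cni.yaml manifest applied above installs kindnet, the CNI recommended earlier in the log for the docker driver + crio runtime combination. Whether its pods came up can be checked with something like the following (a sketch; the app=kindnet label is kindnet's usual selector, assumed here rather than taken from the manifest):

    kubectl --context addons-067378 -n kube-system get pods -l app=kindnet -o wide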
	I1002 06:42:27.729441  295123 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 06:42:27.729636  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:27.729762  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-067378 minikube.k8s.io/updated_at=2025_10_02T06_42_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=addons-067378 minikube.k8s.io/primary=true
	I1002 06:42:27.914369  295123 ops.go:34] apiserver oom_adj: -16
	I1002 06:42:27.914555  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:28.415608  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:28.915214  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:29.414599  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:29.914791  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:30.414792  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:30.914956  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:31.415266  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:31.914588  295123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:42:32.042597  295123 kubeadm.go:1113] duration metric: took 4.313024113s to wait for elevateKubeSystemPrivileges
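The repeated "kubectl get sa default" calls above simply poll until the default ServiceAccount exists; the minikube-rbac ClusterRoleBinding created just before grants cluster-admin to kube-system:default, as the create command shows. If needed, that binding can be inspected afterwards with (a sketch):

    kubectl --context addons-067378 get clusterrolebinding minikube-rbac -o wide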
	I1002 06:42:32.042632  295123 kubeadm.go:402] duration metric: took 21.261338682s to StartCluster
	I1002 06:42:32.042650  295123 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:32.043508  295123 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 06:42:32.043928  295123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:42:32.044125  295123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 06:42:32.044156  295123 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 06:42:32.044394  295123 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:42:32.044432  295123 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
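The toEnable map above lists every addon this run switches on (or explicitly leaves off) for the profile. The eventual addon states can be compared against that map with the addons list subcommand, e.g. (a sketch):

    minikube -p addons-067378 addons list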
	I1002 06:42:32.044512  295123 addons.go:69] Setting yakd=true in profile "addons-067378"
	I1002 06:42:32.044527  295123 addons.go:238] Setting addon yakd=true in "addons-067378"
	I1002 06:42:32.044548  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.045009  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.045356  295123 addons.go:69] Setting inspektor-gadget=true in profile "addons-067378"
	I1002 06:42:32.045381  295123 addons.go:238] Setting addon inspektor-gadget=true in "addons-067378"
	I1002 06:42:32.045416  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.045822  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.046178  295123 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-067378"
	I1002 06:42:32.046199  295123 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-067378"
	I1002 06:42:32.046227  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.046627  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.048388  295123 addons.go:69] Setting metrics-server=true in profile "addons-067378"
	I1002 06:42:32.051480  295123 addons.go:238] Setting addon metrics-server=true in "addons-067378"
	I1002 06:42:32.051537  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.052072  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.054883  295123 addons.go:69] Setting cloud-spanner=true in profile "addons-067378"
	I1002 06:42:32.054910  295123 addons.go:238] Setting addon cloud-spanner=true in "addons-067378"
	I1002 06:42:32.054946  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.055480  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.050996  295123 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-067378"
	I1002 06:42:32.062731  295123 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-067378"
	I1002 06:42:32.062779  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.051014  295123 addons.go:69] Setting registry=true in profile "addons-067378"
	I1002 06:42:32.063232  295123 addons.go:238] Setting addon registry=true in "addons-067378"
	I1002 06:42:32.063263  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.063658  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.066248  295123 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-067378"
	I1002 06:42:32.066410  295123 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-067378"
	I1002 06:42:32.066488  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.067253  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.051022  295123 addons.go:69] Setting registry-creds=true in profile "addons-067378"
	I1002 06:42:32.070286  295123 addons.go:238] Setting addon registry-creds=true in "addons-067378"
	I1002 06:42:32.070324  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.051028  295123 addons.go:69] Setting storage-provisioner=true in profile "addons-067378"
	I1002 06:42:32.073533  295123 addons.go:238] Setting addon storage-provisioner=true in "addons-067378"
	I1002 06:42:32.073571  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.074051  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.089726  295123 addons.go:69] Setting default-storageclass=true in profile "addons-067378"
	I1002 06:42:32.089756  295123 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-067378"
	I1002 06:42:32.090098  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.051132  295123 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-067378"
	I1002 06:42:32.098599  295123 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-067378"
	I1002 06:42:32.098954  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.111606  295123 addons.go:69] Setting gcp-auth=true in profile "addons-067378"
	I1002 06:42:32.111640  295123 mustload.go:65] Loading cluster: addons-067378
	I1002 06:42:32.111917  295123 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:42:32.112296  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.051138  295123 addons.go:69] Setting volcano=true in profile "addons-067378"
	I1002 06:42:32.115583  295123 addons.go:238] Setting addon volcano=true in "addons-067378"
	I1002 06:42:32.115642  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.116111  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.129337  295123 addons.go:69] Setting ingress=true in profile "addons-067378"
	I1002 06:42:32.129370  295123 addons.go:238] Setting addon ingress=true in "addons-067378"
	I1002 06:42:32.129420  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.129917  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.145066  295123 addons.go:69] Setting ingress-dns=true in profile "addons-067378"
	I1002 06:42:32.145106  295123 addons.go:238] Setting addon ingress-dns=true in "addons-067378"
	I1002 06:42:32.145148  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.145629  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.051161  295123 addons.go:69] Setting volumesnapshots=true in profile "addons-067378"
	I1002 06:42:32.152587  295123 addons.go:238] Setting addon volumesnapshots=true in "addons-067378"
	I1002 06:42:32.152628  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.051458  295123 out.go:179] * Verifying Kubernetes components...
	I1002 06:42:32.153211  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.217649  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.284453  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.305893  295123 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 06:42:32.318053  295123 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 06:42:32.322896  295123 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 06:42:32.328865  295123 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 06:42:32.328990  295123 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 06:42:32.329255  295123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:42:32.331305  295123 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 06:42:32.335827  295123 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 06:42:32.335899  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.341150  295123 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 06:42:32.341425  295123 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:42:32.341446  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 06:42:32.341507  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.333671  295123 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 06:42:32.353857  295123 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 06:42:32.353946  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.356721  295123 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1002 06:42:32.357788  295123 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 06:42:32.358155  295123 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 06:42:32.358211  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 06:42:32.358309  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.335366  295123 addons.go:238] Setting addon default-storageclass=true in "addons-067378"
	I1002 06:42:32.335478  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.365277  295123 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 06:42:32.365302  295123 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 06:42:32.365367  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.402048  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 06:42:32.407283  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 06:42:32.411657  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 06:42:32.417475  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 06:42:32.417803  295123 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:42:32.417821  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:42:32.417882  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.418642  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.419102  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.431390  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 06:42:32.436140  295123 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 06:42:32.438495  295123 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 06:42:32.440149  295123 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 06:42:32.440307  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 06:42:32.441481  295123 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-067378"
	I1002 06:42:32.441525  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:32.441958  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:32.456716  295123 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:42:32.456749  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 06:42:32.456862  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.481402  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 06:42:32.486692  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 06:42:32.492045  295123 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 06:42:32.492072  295123 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 06:42:32.492146  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.509482  295123 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:42:32.509507  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 06:42:32.509576  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.519510  295123 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 06:42:32.522653  295123 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 06:42:32.527556  295123 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:42:32.529070  295123 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:42:32.529087  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 06:42:32.529155  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.535411  295123 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 06:42:32.535461  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 06:42:32.535647  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.551039  295123 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:42:32.554439  295123 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:42:32.554463  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 06:42:32.554529  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.557798  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.586753  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.587361  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.588408  295123 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 06:42:32.591561  295123 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 06:42:32.591582  295123 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 06:42:32.591642  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.600805  295123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
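The bash pipeline above rewrites the coredns ConfigMap in place so the Corefile gains a hosts block mapping host.minikube.internal to 192.168.49.1, plus a log directive. The patched Corefile can be read back once the cluster is up, for example (a sketch, assuming the kubeconfig context keeps the profile name, as minikube sets up by default):

    kubectl --context addons-067378 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'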
	I1002 06:42:32.611315  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.612473  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.640279  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.645381  295123 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 06:42:32.648981  295123 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:42:32.649002  295123 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:42:32.649073  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.663322  295123 out.go:179]   - Using image docker.io/busybox:stable
	I1002 06:42:32.666799  295123 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:42:32.666822  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 06:42:32.666885  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:32.715406  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.727520  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.731866  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.760873  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.762069  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.771429  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	W1002 06:42:32.780828  295123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:42:32.781092  295123 retry.go:31] will retry after 252.885938ms: ssh: handshake failed: EOF
	I1002 06:42:32.795856  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.795997  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	W1002 06:42:32.800806  295123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:42:32.800835  295123 retry.go:31] will retry after 151.501199ms: ssh: handshake failed: EOF
	W1002 06:42:32.801220  295123 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:42:32.801234  295123 retry.go:31] will retry after 337.139948ms: ssh: handshake failed: EOF
	I1002 06:42:32.802092  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:32.878209  295123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:42:33.158378  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:42:33.398660  295123 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 06:42:33.398724  295123 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 06:42:33.435775  295123 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 06:42:33.435850  295123 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 06:42:33.442891  295123 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 06:42:33.442916  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 06:42:33.460030  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:42:33.464445  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 06:42:33.501703  295123 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 06:42:33.501734  295123 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 06:42:33.510484  295123 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 06:42:33.510512  295123 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 06:42:33.572556  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:42:33.592358  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:42:33.630697  295123 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 06:42:33.630734  295123 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 06:42:33.656926  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:42:33.660619  295123 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 06:42:33.660653  295123 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 06:42:33.667640  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:42:33.685854  295123 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:42:33.685880  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 06:42:33.722572  295123 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:33.722597  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 06:42:33.730176  295123 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 06:42:33.730220  295123 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 06:42:33.739401  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:42:33.794269  295123 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:42:33.794303  295123 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 06:42:33.833330  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:42:33.838806  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:33.857981  295123 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 06:42:33.858014  295123 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 06:42:33.868764  295123 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 06:42:33.868797  295123 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 06:42:33.887799  295123 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 06:42:33.887827  295123 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 06:42:33.931924  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:42:34.017288  295123 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:42:34.017333  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 06:42:34.020695  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:42:34.028828  295123 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 06:42:34.028865  295123 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 06:42:34.121362  295123 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 06:42:34.121405  295123 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 06:42:34.169436  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:42:34.224918  295123 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:42:34.224942  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 06:42:34.269460  295123 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 06:42:34.269503  295123 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 06:42:34.361719  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:42:34.441417  295123 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 06:42:34.441452  295123 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 06:42:34.504071  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.345661127s)
	I1002 06:42:34.504128  295123 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.625765233s)
	I1002 06:42:34.504195  295123 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.90336598s)
	I1002 06:42:34.504212  295123 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 06:42:34.505008  295123 node_ready.go:35] waiting up to 6m0s for node "addons-067378" to be "Ready" ...
	I1002 06:42:34.714164  295123 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 06:42:34.714193  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 06:42:34.909666  295123 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 06:42:34.909688  295123 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 06:42:35.009973  295123 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-067378" context rescaled to 1 replicas
	I1002 06:42:35.051177  295123 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 06:42:35.051200  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 06:42:35.233925  295123 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 06:42:35.233953  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 06:42:35.436746  295123 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:42:35.436773  295123 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 06:42:35.661704  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:42:36.313343  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.853273797s)
	I1002 06:42:36.313420  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.848951786s)
	I1002 06:42:36.313446  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.74087196s)
	W1002 06:42:36.509129  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:36.924065  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.331668218s)
	I1002 06:42:36.924269  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.267308621s)
	I1002 06:42:36.924304  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.256645339s)
	I1002 06:42:37.049269  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.309831751s)
	I1002 06:42:37.049356  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.216001293s)
	I1002 06:42:37.049373  295123 addons.go:479] Verifying addon registry=true in "addons-067378"
	I1002 06:42:37.053079  295123 out.go:179] * Verifying registry addon...
	I1002 06:42:37.056741  295123 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 06:42:37.109029  295123 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:42:37.109049  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:37.212257  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.373411174s)
	W1002 06:42:37.212289  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:37.212311  295123 retry.go:31] will retry after 342.5556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:37.555710  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:37.647765  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:38.077073  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:42:38.526399  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:38.559700  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.627696679s)
	I1002 06:42:38.559734  295123 addons.go:479] Verifying addon ingress=true in "addons-067378"
	I1002 06:42:38.560051  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.539320893s)
	I1002 06:42:38.560074  295123 addons.go:479] Verifying addon metrics-server=true in "addons-067378"
	I1002 06:42:38.560210  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.198450988s)
	W1002 06:42:38.560239  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:42:38.560256  295123 retry.go:31] will retry after 222.959417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:42:38.560372  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.390656007s)
	I1002 06:42:38.563230  295123 out.go:179] * Verifying ingress addon...
	I1002 06:42:38.565276  295123 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-067378 service yakd-dashboard -n yakd-dashboard
	
	I1002 06:42:38.567860  295123 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 06:42:38.574149  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:38.574581  295123 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 06:42:38.574627  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:38.784210  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:42:39.051551  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.389750517s)
	I1002 06:42:39.051738  295123 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-067378"
	I1002 06:42:39.051694  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.495892999s)
	W1002 06:42:39.051817  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:39.051856  295123 retry.go:31] will retry after 398.626076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:39.055437  295123 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 06:42:39.059303  295123 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 06:42:39.086636  295123 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:42:39.086707  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:39.087180  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:39.092780  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:39.451007  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:39.564290  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:39.564909  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:39.571305  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:40.052927  295123 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 06:42:40.053084  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:40.069804  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:40.069876  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:40.086205  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:40.091006  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:42:40.213741  295123 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 06:42:40.230739  295123 addons.go:238] Setting addon gcp-auth=true in "addons-067378"
	I1002 06:42:40.230791  295123 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:42:40.231261  295123 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:42:40.251934  295123 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 06:42:40.251993  295123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:42:40.290756  295123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	W1002 06:42:40.387242  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:40.387270  295123 retry.go:31] will retry after 574.738578ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:40.391514  295123 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:42:40.394306  295123 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 06:42:40.397096  295123 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 06:42:40.397115  295123 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 06:42:40.410446  295123 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 06:42:40.410471  295123 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 06:42:40.423035  295123 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:42:40.423056  295123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 06:42:40.436365  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:42:40.562273  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:40.564681  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:40.572671  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:40.917818  295123 addons.go:479] Verifying addon gcp-auth=true in "addons-067378"
	I1002 06:42:40.920784  295123 out.go:179] * Verifying gcp-auth addon...
	I1002 06:42:40.924486  295123 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 06:42:40.928599  295123 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 06:42:40.928667  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:40.962756  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 06:42:41.008187  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:41.073467  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:41.073680  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:41.078183  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:41.428650  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:41.560662  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:41.563286  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:41.571376  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 06:42:41.779246  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:41.779279  295123 retry.go:31] will retry after 957.626623ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:41.928146  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:42.060368  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:42.063296  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:42.077146  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:42.429281  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:42.560699  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:42.564643  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:42.571269  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:42.737716  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:42.928423  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:43.010929  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:43.060062  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:43.062509  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:43.071984  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:43.428095  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:43.543988  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:43.544071  295123 retry.go:31] will retry after 746.606443ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:43.562301  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:43.562834  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:43.571996  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:43.928274  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:44.060936  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:44.064415  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:44.071491  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:44.291843  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:44.427582  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:44.566495  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:44.566989  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:44.572284  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:44.928703  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:45.024235  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:45.061816  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:45.065355  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:45.099768  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:45.351808  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.059915661s)
	W1002 06:42:45.351924  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:45.351979  295123 retry.go:31] will retry after 1.77210152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:45.433371  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:45.560417  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:45.563394  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:45.572185  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:45.928347  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:46.061937  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:46.063034  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:46.072508  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:46.427775  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:46.560918  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:46.562864  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:46.570722  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:46.927285  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:47.060666  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:47.062901  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:47.071908  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:47.124991  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:47.428767  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:47.509184  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:47.566638  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:47.569510  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:47.571568  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 06:42:47.918869  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:47.918912  295123 retry.go:31] will retry after 1.841110372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:47.927945  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:48.060381  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:48.063710  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:48.076876  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:48.428121  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:48.560242  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:48.561907  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:48.570697  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:48.928266  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:49.059559  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:49.062708  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:49.072233  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:49.428283  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:49.560727  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:49.562603  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:49.571576  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:49.761057  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:49.927528  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:50.013054  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:50.062146  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:50.064287  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:50.071946  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:50.428159  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:50.562035  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:50.563971  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:50.571207  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 06:42:50.585648  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:50.585683  295123 retry.go:31] will retry after 5.495287107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:50.928305  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:51.060341  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:51.062472  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:51.073209  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:51.427688  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:51.560089  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:51.562293  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:51.571525  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:51.927833  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:52.020845  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:52.060126  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:52.062689  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:52.071673  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:52.428080  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:52.561743  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:52.563823  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:52.570668  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:52.928177  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:53.060011  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:53.061773  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:53.071673  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:53.428134  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:53.561981  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:53.562982  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:53.571117  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:53.928068  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:54.059766  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:54.061995  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:54.071727  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:54.427359  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:54.508229  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:54.562342  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:54.564089  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:54.571065  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:54.928805  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:55.060563  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:55.062902  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:55.076558  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:55.427804  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:55.560940  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:55.563409  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:55.571595  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:55.928759  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:56.060942  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:56.062727  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:56.076238  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:56.081130  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:42:56.427609  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:56.509252  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:56.564725  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:56.565340  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:56.571679  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 06:42:56.874526  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:56.874563  295123 retry.go:31] will retry after 5.014714007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:42:56.928055  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:57.059855  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:57.061813  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:57.071964  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:57.428564  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:57.561972  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:57.562556  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:57.574399  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:57.928079  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:58.061184  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:58.063164  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:58.075651  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:58.427318  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:58.560966  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:58.562728  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:58.571579  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:58.928425  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:42:59.010352  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:42:59.060875  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:59.063153  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:59.072224  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:59.427495  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:42:59.560555  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:59.562513  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:42:59.571618  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:42:59.927929  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:00.088117  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:00.088924  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:00.091832  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:00.429060  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:00.561808  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:00.563592  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:00.572420  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:00.927938  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:01.059984  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:01.061921  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:01.071575  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:01.427638  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:43:01.508149  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:43:01.560582  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:01.563203  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:01.571221  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:01.889528  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:43:01.928533  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:02.060586  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:02.063814  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:02.072836  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:02.428751  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:02.563786  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:02.565116  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:02.571027  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 06:43:02.758459  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:43:02.758496  295123 retry.go:31] will retry after 8.883761034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:43:02.927859  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:03.059726  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:03.062243  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:03.071707  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:03.428033  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:43:03.510944  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:43:03.560556  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:03.562610  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:03.572373  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:03.927894  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:04.060931  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:04.062872  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:04.070999  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:04.428283  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:04.561580  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:04.562902  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:04.571662  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:04.928827  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:05.059887  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:05.062124  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:05.071873  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:05.428009  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:05.561304  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:05.563686  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:05.571674  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:05.928227  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:43:06.010543  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:43:06.061466  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:06.062947  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:06.072581  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:06.427354  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:06.560924  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:06.563386  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:06.571040  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:06.927831  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:07.060081  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:07.062212  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:07.071262  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:07.428192  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:07.560752  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:07.562844  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:07.571659  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:07.927980  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:43:08.010858  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:43:08.060080  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:08.062301  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:08.076371  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:08.428309  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:08.562058  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:08.562328  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:08.571059  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:08.927806  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:09.059822  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:09.062756  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:09.072327  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:09.428457  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:09.562087  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:09.562296  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:09.571337  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:09.931470  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:43:10.017317  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:43:10.061683  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:10.062895  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:10.072321  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:10.428530  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:10.560309  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:10.562540  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:10.571473  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:10.927401  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:11.060754  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:11.062699  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:11.075770  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:11.427735  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:11.561353  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:11.563226  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:11.570962  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:11.643343  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:43:11.927864  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:12.062674  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:12.065240  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:12.071717  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:12.427869  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:43:12.455208  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:43:12.455243  295123 retry.go:31] will retry after 18.122148078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:43:12.508095  295123 node_ready.go:57] node "addons-067378" has "Ready":"False" status (will retry)
	I1002 06:43:12.560935  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:12.563425  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:12.571042  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:12.928123  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:13.059876  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:13.062047  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:13.071629  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:13.430177  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:13.523521  295123 node_ready.go:49] node "addons-067378" is "Ready"
	I1002 06:43:13.523552  295123 node_ready.go:38] duration metric: took 39.018514975s for node "addons-067378" to be "Ready" ...
	I1002 06:43:13.523567  295123 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:43:13.523628  295123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:43:13.548592  295123 api_server.go:72] duration metric: took 41.504408642s to wait for apiserver process to appear ...
	I1002 06:43:13.548620  295123 api_server.go:88] waiting for apiserver healthz status ...
	I1002 06:43:13.548640  295123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 06:43:13.578362  295123 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 06:43:13.589679  295123 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:43:13.589705  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:13.589837  295123 api_server.go:141] control plane version: v1.34.1
	I1002 06:43:13.589863  295123 api_server.go:131] duration metric: took 41.236606ms to wait for apiserver health ...
	I1002 06:43:13.589872  295123 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 06:43:13.590150  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:13.590221  295123 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:43:13.590234  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:13.696622  295123 system_pods.go:59] 19 kube-system pods found
	I1002 06:43:13.696658  295123 system_pods.go:61] "coredns-66bc5c9577-hqkgq" [842b83a7-7c09-4912-b9be-4ecce88ce7ca] Pending
	I1002 06:43:13.696668  295123 system_pods.go:61] "csi-hostpath-attacher-0" [10e37445-7bbb-44bc-9359-12524f894f88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:43:13.696673  295123 system_pods.go:61] "csi-hostpath-resizer-0" [863579f2-ece1-46c1-8f65-cdc2f410a1ab] Pending
	I1002 06:43:13.696679  295123 system_pods.go:61] "csi-hostpathplugin-g5rfp" [4dcebe4e-2c41-4731-a568-c47ea66b900d] Pending
	I1002 06:43:13.696684  295123 system_pods.go:61] "etcd-addons-067378" [0b35790c-32b5-4476-8519-d49ae2cf6f68] Running
	I1002 06:43:13.696688  295123 system_pods.go:61] "kindnet-rvljv" [3c704515-6f3d-45d5-a055-39afc813eeb5] Running
	I1002 06:43:13.696693  295123 system_pods.go:61] "kube-apiserver-addons-067378" [00be11a7-5cb7-4a64-8584-0d45b9b8057f] Running
	I1002 06:43:13.696698  295123 system_pods.go:61] "kube-controller-manager-addons-067378" [8450cb9e-1281-47df-964c-6ce56c609204] Running
	I1002 06:43:13.696704  295123 system_pods.go:61] "kube-ingress-dns-minikube" [57c3c67f-d7c1-4538-bb00-1a8cee5bee92] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:43:13.696713  295123 system_pods.go:61] "kube-proxy-glkj6" [245ca456-f1cb-4de2-bb7c-9cc322f5ab9d] Running
	I1002 06:43:13.696718  295123 system_pods.go:61] "kube-scheduler-addons-067378" [faf63f65-ae11-4b01-b3d3-6d71a1ad21ef] Running
	I1002 06:43:13.696730  295123 system_pods.go:61] "metrics-server-85b7d694d7-6x654" [0118f095-2060-4680-b4c9-c2c78976dda1] Pending
	I1002 06:43:13.696735  295123 system_pods.go:61] "nvidia-device-plugin-daemonset-kjxmr" [2391a5b9-29ae-4cd1-83fe-07aca873c5d1] Pending
	I1002 06:43:13.696740  295123 system_pods.go:61] "registry-66898fdd98-w2szx" [b634a53f-990a-4739-a9b3-2cf22c99e147] Pending
	I1002 06:43:13.696744  295123 system_pods.go:61] "registry-creds-764b6fb674-j77fn" [62c7e651-a525-434a-b3a2-67917ea0034f] Pending
	I1002 06:43:13.696754  295123 system_pods.go:61] "registry-proxy-zrq82" [76bc889e-53d2-4b4b-89a1-527536fef260] Pending
	I1002 06:43:13.696760  295123 system_pods.go:61] "snapshot-controller-7d9fbc56b8-57t4l" [564a4d92-8a32-4efe-917b-69afe2ecffa4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:13.696767  295123 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vvfqw" [ec961b39-c695-47f2-bcfd-9196e9e451a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:13.696777  295123 system_pods.go:61] "storage-provisioner" [0b1f3ab3-a366-4164-97c6-d59947371157] Pending
	I1002 06:43:13.696784  295123 system_pods.go:74] duration metric: took 106.90342ms to wait for pod list to return data ...
	I1002 06:43:13.696792  295123 default_sa.go:34] waiting for default service account to be created ...
	I1002 06:43:13.724220  295123 default_sa.go:45] found service account: "default"
	I1002 06:43:13.724252  295123 default_sa.go:55] duration metric: took 27.44824ms for default service account to be created ...
	I1002 06:43:13.724265  295123 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 06:43:13.749400  295123 system_pods.go:86] 19 kube-system pods found
	I1002 06:43:13.749436  295123 system_pods.go:89] "coredns-66bc5c9577-hqkgq" [842b83a7-7c09-4912-b9be-4ecce88ce7ca] Pending
	I1002 06:43:13.749448  295123 system_pods.go:89] "csi-hostpath-attacher-0" [10e37445-7bbb-44bc-9359-12524f894f88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:43:13.749454  295123 system_pods.go:89] "csi-hostpath-resizer-0" [863579f2-ece1-46c1-8f65-cdc2f410a1ab] Pending
	I1002 06:43:13.749459  295123 system_pods.go:89] "csi-hostpathplugin-g5rfp" [4dcebe4e-2c41-4731-a568-c47ea66b900d] Pending
	I1002 06:43:13.749463  295123 system_pods.go:89] "etcd-addons-067378" [0b35790c-32b5-4476-8519-d49ae2cf6f68] Running
	I1002 06:43:13.749467  295123 system_pods.go:89] "kindnet-rvljv" [3c704515-6f3d-45d5-a055-39afc813eeb5] Running
	I1002 06:43:13.749472  295123 system_pods.go:89] "kube-apiserver-addons-067378" [00be11a7-5cb7-4a64-8584-0d45b9b8057f] Running
	I1002 06:43:13.749476  295123 system_pods.go:89] "kube-controller-manager-addons-067378" [8450cb9e-1281-47df-964c-6ce56c609204] Running
	I1002 06:43:13.749487  295123 system_pods.go:89] "kube-ingress-dns-minikube" [57c3c67f-d7c1-4538-bb00-1a8cee5bee92] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:43:13.749495  295123 system_pods.go:89] "kube-proxy-glkj6" [245ca456-f1cb-4de2-bb7c-9cc322f5ab9d] Running
	I1002 06:43:13.749500  295123 system_pods.go:89] "kube-scheduler-addons-067378" [faf63f65-ae11-4b01-b3d3-6d71a1ad21ef] Running
	I1002 06:43:13.749506  295123 system_pods.go:89] "metrics-server-85b7d694d7-6x654" [0118f095-2060-4680-b4c9-c2c78976dda1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:43:13.749517  295123 system_pods.go:89] "nvidia-device-plugin-daemonset-kjxmr" [2391a5b9-29ae-4cd1-83fe-07aca873c5d1] Pending
	I1002 06:43:13.749521  295123 system_pods.go:89] "registry-66898fdd98-w2szx" [b634a53f-990a-4739-a9b3-2cf22c99e147] Pending
	I1002 06:43:13.749525  295123 system_pods.go:89] "registry-creds-764b6fb674-j77fn" [62c7e651-a525-434a-b3a2-67917ea0034f] Pending
	I1002 06:43:13.749536  295123 system_pods.go:89] "registry-proxy-zrq82" [76bc889e-53d2-4b4b-89a1-527536fef260] Pending
	I1002 06:43:13.749542  295123 system_pods.go:89] "snapshot-controller-7d9fbc56b8-57t4l" [564a4d92-8a32-4efe-917b-69afe2ecffa4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:13.749549  295123 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vvfqw" [ec961b39-c695-47f2-bcfd-9196e9e451a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:13.749559  295123 system_pods.go:89] "storage-provisioner" [0b1f3ab3-a366-4164-97c6-d59947371157] Pending
	I1002 06:43:13.749584  295123 retry.go:31] will retry after 241.662189ms: missing components: kube-dns
	I1002 06:43:13.950766  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:14.022829  295123 system_pods.go:86] 19 kube-system pods found
	I1002 06:43:14.022870  295123 system_pods.go:89] "coredns-66bc5c9577-hqkgq" [842b83a7-7c09-4912-b9be-4ecce88ce7ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:43:14.022879  295123 system_pods.go:89] "csi-hostpath-attacher-0" [10e37445-7bbb-44bc-9359-12524f894f88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:43:14.022885  295123 system_pods.go:89] "csi-hostpath-resizer-0" [863579f2-ece1-46c1-8f65-cdc2f410a1ab] Pending
	I1002 06:43:14.022893  295123 system_pods.go:89] "csi-hostpathplugin-g5rfp" [4dcebe4e-2c41-4731-a568-c47ea66b900d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:43:14.022898  295123 system_pods.go:89] "etcd-addons-067378" [0b35790c-32b5-4476-8519-d49ae2cf6f68] Running
	I1002 06:43:14.022903  295123 system_pods.go:89] "kindnet-rvljv" [3c704515-6f3d-45d5-a055-39afc813eeb5] Running
	I1002 06:43:14.022907  295123 system_pods.go:89] "kube-apiserver-addons-067378" [00be11a7-5cb7-4a64-8584-0d45b9b8057f] Running
	I1002 06:43:14.022911  295123 system_pods.go:89] "kube-controller-manager-addons-067378" [8450cb9e-1281-47df-964c-6ce56c609204] Running
	I1002 06:43:14.022918  295123 system_pods.go:89] "kube-ingress-dns-minikube" [57c3c67f-d7c1-4538-bb00-1a8cee5bee92] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:43:14.022922  295123 system_pods.go:89] "kube-proxy-glkj6" [245ca456-f1cb-4de2-bb7c-9cc322f5ab9d] Running
	I1002 06:43:14.022927  295123 system_pods.go:89] "kube-scheduler-addons-067378" [faf63f65-ae11-4b01-b3d3-6d71a1ad21ef] Running
	I1002 06:43:14.022934  295123 system_pods.go:89] "metrics-server-85b7d694d7-6x654" [0118f095-2060-4680-b4c9-c2c78976dda1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:43:14.022938  295123 system_pods.go:89] "nvidia-device-plugin-daemonset-kjxmr" [2391a5b9-29ae-4cd1-83fe-07aca873c5d1] Pending
	I1002 06:43:14.022945  295123 system_pods.go:89] "registry-66898fdd98-w2szx" [b634a53f-990a-4739-a9b3-2cf22c99e147] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:43:14.022954  295123 system_pods.go:89] "registry-creds-764b6fb674-j77fn" [62c7e651-a525-434a-b3a2-67917ea0034f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:43:14.022962  295123 system_pods.go:89] "registry-proxy-zrq82" [76bc889e-53d2-4b4b-89a1-527536fef260] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:43:14.022974  295123 system_pods.go:89] "snapshot-controller-7d9fbc56b8-57t4l" [564a4d92-8a32-4efe-917b-69afe2ecffa4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:14.022980  295123 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vvfqw" [ec961b39-c695-47f2-bcfd-9196e9e451a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:14.022985  295123 system_pods.go:89] "storage-provisioner" [0b1f3ab3-a366-4164-97c6-d59947371157] Pending
	I1002 06:43:14.023007  295123 retry.go:31] will retry after 298.767136ms: missing components: kube-dns
	I1002 06:43:14.097948  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:14.098391  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:14.100832  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:14.326750  295123 system_pods.go:86] 19 kube-system pods found
	I1002 06:43:14.326788  295123 system_pods.go:89] "coredns-66bc5c9577-hqkgq" [842b83a7-7c09-4912-b9be-4ecce88ce7ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:43:14.326803  295123 system_pods.go:89] "csi-hostpath-attacher-0" [10e37445-7bbb-44bc-9359-12524f894f88] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:43:14.326813  295123 system_pods.go:89] "csi-hostpath-resizer-0" [863579f2-ece1-46c1-8f65-cdc2f410a1ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 06:43:14.326820  295123 system_pods.go:89] "csi-hostpathplugin-g5rfp" [4dcebe4e-2c41-4731-a568-c47ea66b900d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:43:14.326831  295123 system_pods.go:89] "etcd-addons-067378" [0b35790c-32b5-4476-8519-d49ae2cf6f68] Running
	I1002 06:43:14.326836  295123 system_pods.go:89] "kindnet-rvljv" [3c704515-6f3d-45d5-a055-39afc813eeb5] Running
	I1002 06:43:14.326843  295123 system_pods.go:89] "kube-apiserver-addons-067378" [00be11a7-5cb7-4a64-8584-0d45b9b8057f] Running
	I1002 06:43:14.326847  295123 system_pods.go:89] "kube-controller-manager-addons-067378" [8450cb9e-1281-47df-964c-6ce56c609204] Running
	I1002 06:43:14.326855  295123 system_pods.go:89] "kube-ingress-dns-minikube" [57c3c67f-d7c1-4538-bb00-1a8cee5bee92] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:43:14.326868  295123 system_pods.go:89] "kube-proxy-glkj6" [245ca456-f1cb-4de2-bb7c-9cc322f5ab9d] Running
	I1002 06:43:14.326873  295123 system_pods.go:89] "kube-scheduler-addons-067378" [faf63f65-ae11-4b01-b3d3-6d71a1ad21ef] Running
	I1002 06:43:14.326879  295123 system_pods.go:89] "metrics-server-85b7d694d7-6x654" [0118f095-2060-4680-b4c9-c2c78976dda1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:43:14.326886  295123 system_pods.go:89] "nvidia-device-plugin-daemonset-kjxmr" [2391a5b9-29ae-4cd1-83fe-07aca873c5d1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:43:14.326898  295123 system_pods.go:89] "registry-66898fdd98-w2szx" [b634a53f-990a-4739-a9b3-2cf22c99e147] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:43:14.326910  295123 system_pods.go:89] "registry-creds-764b6fb674-j77fn" [62c7e651-a525-434a-b3a2-67917ea0034f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:43:14.326915  295123 system_pods.go:89] "registry-proxy-zrq82" [76bc889e-53d2-4b4b-89a1-527536fef260] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:43:14.326922  295123 system_pods.go:89] "snapshot-controller-7d9fbc56b8-57t4l" [564a4d92-8a32-4efe-917b-69afe2ecffa4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:14.326931  295123 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vvfqw" [ec961b39-c695-47f2-bcfd-9196e9e451a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:43:14.326936  295123 system_pods.go:89] "storage-provisioner" [0b1f3ab3-a366-4164-97c6-d59947371157] Running
	I1002 06:43:14.326947  295123 system_pods.go:126] duration metric: took 602.676668ms to wait for k8s-apps to be running ...
	I1002 06:43:14.326958  295123 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 06:43:14.327011  295123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:43:14.345372  295123 system_svc.go:56] duration metric: took 18.404198ms WaitForService to wait for kubelet
	I1002 06:43:14.345404  295123 kubeadm.go:586] duration metric: took 42.301222029s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:43:14.345424  295123 node_conditions.go:102] verifying NodePressure condition ...
	I1002 06:43:14.348581  295123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 06:43:14.348614  295123 node_conditions.go:123] node cpu capacity is 2
	I1002 06:43:14.348627  295123 node_conditions.go:105] duration metric: took 3.197799ms to run NodePressure ...
	I1002 06:43:14.348640  295123 start.go:241] waiting for startup goroutines ...
	I1002 06:43:14.428294  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:14.566133  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:14.566601  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:14.574186  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:14.928429  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:15.069815  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:15.071054  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:15.076967  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:15.428467  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:15.568748  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:15.569195  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:15.584160  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:15.929630  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:16.060340  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:16.064947  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:16.072002  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:16.428800  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:16.564726  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:16.565322  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:16.571587  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:16.927766  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:17.063701  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:17.066486  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:17.073628  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:17.428091  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:17.560897  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:17.563681  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:17.571596  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:17.927785  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:18.062473  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:18.064241  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:18.071834  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:18.428853  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:18.560427  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:18.564741  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:18.571734  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:18.927703  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:19.061640  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:19.063360  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:19.075431  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:19.428018  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:19.560332  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:19.563413  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:19.571310  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:19.927714  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:20.060725  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:20.062806  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:20.071641  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:20.428218  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:20.561053  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:20.562947  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:20.571622  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:20.928311  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:21.060494  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:21.063024  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:21.071275  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:21.429147  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:21.564007  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:21.564340  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:21.571948  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:21.929078  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:22.064312  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:22.064890  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:22.071476  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:22.428285  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:22.562815  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:22.564885  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:22.571647  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:22.928147  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:23.062182  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:23.065226  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:23.071702  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:23.429253  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:23.566410  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:23.567677  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:23.572252  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:23.928812  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:24.060890  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:24.063330  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:24.071627  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:24.428579  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:24.565466  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:24.565597  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:24.571150  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:24.928338  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:25.060416  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:25.063142  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:25.072183  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:25.428614  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:25.562347  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:25.565185  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:25.572210  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:25.928038  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:26.061704  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:26.064142  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:26.071628  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:26.428147  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:26.561940  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:26.565251  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:26.571646  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:26.928656  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:27.060949  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:27.064860  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:27.071995  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:27.428289  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:27.563067  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:27.563293  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:27.571521  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:27.928649  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:28.061832  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:28.064947  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:28.071500  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:28.428445  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:28.565905  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:28.566319  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:28.574137  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:28.928404  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:29.061077  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:29.064326  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:29.071471  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:29.428510  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:29.566557  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:29.573966  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:29.575963  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:29.928182  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:30.063437  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:30.067384  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:30.096974  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:30.429013  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:30.565585  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:30.566027  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:30.572366  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:30.577584  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:43:30.928569  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:31.061618  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:31.064841  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:31.071795  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:31.428352  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:31.569126  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:31.569271  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:31.572359  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:31.630113  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.052486797s)
	W1002 06:43:31.630155  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:43:31.630173  295123 retry.go:31] will retry after 18.374199675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
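The failure above is kubectl's client-side validation: the top level of /etc/kubernetes/addons/ig-crd.yaml apparently declares neither apiVersion nor kind, so the apply is rejected even though the other gadget objects go through unchanged. A minimal standalone sketch of that same check (not minikube's own code; the local file name and the sigs.k8s.io/yaml dependency are assumptions for illustration):

// checkmanifest.go - sketch: verify a manifest declares apiVersion and kind,
// the two fields kubectl's validation reports as missing in the log above.
package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/yaml" // assumed dependency; any YAML decoder would do
)

// typeMeta mirrors the two top-level fields every Kubernetes object needs.
type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

func main() {
	// Hypothetical local copy of the addon manifest; a multi-document file
	// would need to be split on "---" and each document checked separately.
	data, err := os.ReadFile("ig-crd.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	var tm typeMeta
	if err := yaml.Unmarshal(data, &tm); err != nil {
		fmt.Fprintln(os.Stderr, "parse:", err)
		os.Exit(1)
	}
	if tm.APIVersion == "" || tm.Kind == "" {
		fmt.Println("manifest would fail validation: apiVersion or kind not set")
		os.Exit(1)
	}
	fmt.Printf("ok: %s/%s\n", tm.APIVersion, tm.Kind)
}
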
	I1002 06:43:31.928306  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:32.060715  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:32.063338  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:32.071586  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:32.428456  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:32.565733  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:32.566203  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:32.571027  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:32.927739  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:33.060386  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:33.062507  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:33.072136  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:33.428511  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:33.563345  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:33.566163  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:33.572546  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:33.927818  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:34.060684  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:34.063728  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:34.071460  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:34.428854  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:34.559933  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:34.562721  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:34.571862  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:34.928082  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:35.060900  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:35.064134  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:35.071324  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:35.428515  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:35.561221  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:35.564079  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:35.570832  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:35.927642  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:36.063209  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:36.063323  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:36.072210  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:36.435399  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:36.562960  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:36.563485  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:36.571262  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:36.928483  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:37.061187  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:37.064228  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:37.076792  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:37.428807  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:37.561341  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:37.566559  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:37.571476  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:37.928265  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:38.061217  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:38.064181  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:38.071664  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:38.428541  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:38.561477  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:38.563549  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:38.572027  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:38.928787  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:39.060917  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:39.065402  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:39.071222  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:39.428608  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:39.565935  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:39.566239  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:39.571237  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:39.927425  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:40.060993  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:40.064323  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:40.071861  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:40.428645  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:40.560591  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:40.564977  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:40.572454  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:40.927723  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:41.064301  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:41.065269  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:41.072737  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:41.429176  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:41.561075  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:41.564632  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:41.571842  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:41.928196  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:42.065590  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:42.066004  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:42.077105  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:42.433561  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:42.561238  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:42.564536  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:42.571255  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:42.927687  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:43.061692  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:43.064577  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:43.071915  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:43.428426  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:43.562304  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:43.564143  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:43.571345  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:43.927841  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:44.061679  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:44.064295  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:44.072070  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:44.428585  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:44.569924  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:44.572010  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:44.572997  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:44.928493  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:45.072484  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:45.073173  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:45.120383  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:45.429836  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:45.561518  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:45.567750  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:45.572017  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:45.928443  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:46.063402  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:46.063915  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:46.072121  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:46.428970  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:46.562343  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:46.563107  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:46.571654  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:46.929523  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:47.062061  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:47.065903  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:47.078976  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:47.429226  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:47.560842  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:47.564516  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:47.572897  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:47.929712  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:48.063749  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:48.067374  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:48.072824  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:48.428303  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:48.563434  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:48.563908  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:48.664426  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:48.927696  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:49.059948  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:49.063000  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:49.071896  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:49.433245  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:49.562659  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:49.563406  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:49.571612  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:49.927881  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:50.009926  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:43:50.061421  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:50.083710  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:50.084045  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:50.428627  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:50.564618  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:50.564872  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:50.578361  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:50.929159  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:51.061104  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:51.064167  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:51.071695  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:51.173203  295123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.16321785s)
	W1002 06:43:51.173288  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:43:51.173355  295123 retry.go:31] will retry after 34.424856834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
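The retry.go:31 lines show minikube simply rescheduling the same apply after a growing, jittered delay (18s on the first failure, 34s on the second). A self-contained sketch of that retry-with-backoff pattern, under the assumption that the failing operation is opaque to the retry loop (this is an illustration, not minikube's actual implementation):

// retrysketch.go - sketch: run an operation, and on failure wait an increasing,
// slightly randomized delay before trying again, as the log above does.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls op up to attempts times, sleeping base*2^i plus jitter between tries.
func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(3, 2*time.Second, func() error {
		calls++
		if calls < 3 {
			return errors.New("apply failed") // stand-in for the kubectl apply above
		}
		return nil
	})
	fmt.Println("result:", err)
}
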
	I1002 06:43:51.432500  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:51.562806  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:51.565537  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:51.572067  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:51.928647  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:52.065994  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:52.066426  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:52.073304  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:52.436933  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:52.569980  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:52.581236  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:52.583715  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:52.928226  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:53.061648  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:53.068400  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:53.071425  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:53.437280  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:53.561026  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:53.571409  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:53.576514  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:53.928244  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:54.061626  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:54.063976  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:54.071304  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:54.429661  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:54.569286  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:54.570048  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:54.575280  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:54.927258  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:55.061784  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:55.065564  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:55.072220  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:55.437566  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:55.567061  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:55.570340  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:55.571671  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:55.928276  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:56.062247  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:56.063319  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:56.072312  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:56.431466  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:56.562942  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:56.567247  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:56.571634  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:56.928071  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:57.061516  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:57.065121  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:57.072553  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:57.428613  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:57.565176  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:57.573233  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:57.578266  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:57.927823  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:58.060621  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:58.062435  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:58.072066  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:58.428512  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:58.562497  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:58.563716  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:58.571727  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:58.928836  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:59.059867  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:59.062396  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:59.071817  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:59.427895  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:43:59.563108  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:59.563253  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:43:59.571359  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:43:59.927537  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:00.062259  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:44:00.101600  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:00.104924  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:00.438341  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:00.564000  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:44:00.564969  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:00.572762  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:00.928202  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:01.066066  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:44:01.066319  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:01.077364  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:01.431001  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:01.567892  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:44:01.568239  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:01.574357  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:01.928480  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:02.061473  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:44:02.064718  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:02.072585  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:02.428290  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:02.560953  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:44:02.564528  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:02.572149  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:02.929998  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:03.061920  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:44:03.063755  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:03.073996  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:03.428568  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:03.566832  295123 kapi.go:107] duration metric: took 1m26.510080858s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 06:44:03.567143  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:03.571512  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:03.928365  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:04.063020  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:04.071550  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:04.427879  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:04.564155  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:04.582538  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:04.928416  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:05.063279  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:05.071445  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:05.449430  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:05.564260  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:05.571469  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:05.927818  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:06.064385  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:06.077019  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:06.428254  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:06.562192  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:06.571855  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:06.928255  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:07.062445  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:07.072063  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:07.428273  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:07.564318  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:07.572438  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:07.927556  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:08.062699  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:08.071796  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:08.428207  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:08.563996  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:08.580684  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:08.928573  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:09.063515  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:09.082292  295123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:44:09.428659  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:09.564643  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:09.573491  295123 kapi.go:107] duration metric: took 1m31.005629359s to wait for app.kubernetes.io/name=ingress-nginx ...
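The repeated kapi.go:96 lines are a poll loop: list the pods matching a label selector and keep waiting until one reports Running, then log the kapi.go:107 duration metric seen above. A rough client-go sketch of that kind of wait (the kubeconfig path, namespace, and timeout are placeholders, not minikube's internals):

// waitpods.go - sketch: poll pods matching a label selector until one is Running,
// roughly what the "waiting for pod ... current state: Pending" lines reflect.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster; the kubeconfig path is a placeholder.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selector := "kubernetes.io/minikube-addons=csi-hostpath-driver" // taken from the log above
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("pod is Running:", p.Name)
					return
				}
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for", selector)
}
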
	I1002 06:44:09.927858  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:10.063583  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:10.428000  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:10.580376  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:10.928089  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:11.064340  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:11.428337  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:11.565677  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:11.928143  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:12.064106  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:12.428399  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:12.563644  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:12.928281  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:13.066610  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:13.428276  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:13.566470  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:13.928829  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:14.064216  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:14.429315  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:14.563871  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:14.930850  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:15.064739  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:15.427835  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:15.562841  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:15.928347  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:16.066959  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:16.428435  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:16.563542  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:16.928100  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:17.064110  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:17.428398  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:17.564810  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:17.928542  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:18.064351  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:18.429938  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:18.562671  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:18.928333  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:19.062784  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:19.428162  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:19.562971  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:19.928478  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:20.063999  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:20.428804  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:20.567193  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:20.927631  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:21.081283  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:21.427786  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:44:21.564950  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:21.928607  295123 kapi.go:107] duration metric: took 1m41.004123767s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 06:44:21.930719  295123 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-067378 cluster.
	I1002 06:44:21.933534  295123 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 06:44:21.936551  295123 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
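(For reference: the `gcp-auth-skip-secret` opt-out mentioned in the messages above is a pod label that must be present when the pod is created, since the webhook mutates pods at admission time. A minimal sketch, assuming the conventional value "true" and a hypothetical pod name:)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                  # hypothetical name, for illustration only
  labels:
    gcp-auth-skip-secret: "true"      # opt-out label referenced in the message above
spec:
  containers:
  - name: main
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF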
	I1002 06:44:22.066203  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:22.566664  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:23.077012  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:23.563342  295123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:44:24.063700  295123 kapi.go:107] duration metric: took 1m45.004391753s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 06:44:25.599261  295123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 06:44:26.436237  295123 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:44:26.436332  295123 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
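(The retried apply above fails because kubectl's client-side validation requires every manifest document to declare `apiVersion` and `kind`, and the ig-crd.yaml written to the node apparently lacks them. A sketch of how one might confirm this from the host; the profile name and path are taken from the log, and the commented header shows the fields validation expects rather than the file's actual contents:)

minikube -p addons-067378 ssh -- sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
# a valid document would begin with the two required fields, e.g. for a CRD:
#   apiVersion: apiextensions.k8s.io/v1
#   kind: CustomResourceDefinition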
	I1002 06:44:26.439440  295123 out.go:179] * Enabled addons: amd-gpu-device-plugin, ingress-dns, cloud-spanner, default-storageclass, registry-creds, nvidia-device-plugin, storage-provisioner-rancher, storage-provisioner, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1002 06:44:26.442317  295123 addons.go:514] duration metric: took 1m54.397863216s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns cloud-spanner default-storageclass registry-creds nvidia-device-plugin storage-provisioner-rancher storage-provisioner metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1002 06:44:26.442407  295123 start.go:246] waiting for cluster config update ...
	I1002 06:44:26.442451  295123 start.go:255] writing updated cluster config ...
	I1002 06:44:26.442818  295123 ssh_runner.go:195] Run: rm -f paused
	I1002 06:44:26.446776  295123 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 06:44:26.450947  295123 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hqkgq" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:26.457422  295123 pod_ready.go:94] pod "coredns-66bc5c9577-hqkgq" is "Ready"
	I1002 06:44:26.457447  295123 pod_ready.go:86] duration metric: took 6.472654ms for pod "coredns-66bc5c9577-hqkgq" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:26.459887  295123 pod_ready.go:83] waiting for pod "etcd-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:26.464776  295123 pod_ready.go:94] pod "etcd-addons-067378" is "Ready"
	I1002 06:44:26.464801  295123 pod_ready.go:86] duration metric: took 4.886916ms for pod "etcd-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:26.468445  295123 pod_ready.go:83] waiting for pod "kube-apiserver-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:26.473496  295123 pod_ready.go:94] pod "kube-apiserver-addons-067378" is "Ready"
	I1002 06:44:26.473526  295123 pod_ready.go:86] duration metric: took 5.052094ms for pod "kube-apiserver-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:26.476093  295123 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:26.851778  295123 pod_ready.go:94] pod "kube-controller-manager-addons-067378" is "Ready"
	I1002 06:44:26.851826  295123 pod_ready.go:86] duration metric: took 375.688634ms for pod "kube-controller-manager-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:27.051578  295123 pod_ready.go:83] waiting for pod "kube-proxy-glkj6" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:27.451442  295123 pod_ready.go:94] pod "kube-proxy-glkj6" is "Ready"
	I1002 06:44:27.451469  295123 pod_ready.go:86] duration metric: took 399.863968ms for pod "kube-proxy-glkj6" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:27.650629  295123 pod_ready.go:83] waiting for pod "kube-scheduler-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:28.050907  295123 pod_ready.go:94] pod "kube-scheduler-addons-067378" is "Ready"
	I1002 06:44:28.050935  295123 pod_ready.go:86] duration metric: took 400.277387ms for pod "kube-scheduler-addons-067378" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:44:28.050949  295123 pod_ready.go:40] duration metric: took 1.60413685s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 06:44:28.110717  295123 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 06:44:28.115803  295123 out.go:179] * Done! kubectl is now configured to use "addons-067378" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 06:44:26 addons-067378 crio[828]: time="2025-10-02T06:44:26.856731591Z" level=info msg="Stopped pod sandbox (already stopped): 3e74af7f59d1ca7d3d0c048dd145b59d4754e41088ad32175fcfb9f88bddeea4" id=e8471498-2742-4701-b0df-a29931d3a474 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 06:44:26 addons-067378 crio[828]: time="2025-10-02T06:44:26.857104534Z" level=info msg="Removing pod sandbox: 3e74af7f59d1ca7d3d0c048dd145b59d4754e41088ad32175fcfb9f88bddeea4" id=84f47985-7f23-4b94-8336-80fa4c23fcfc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 06:44:26 addons-067378 crio[828]: time="2025-10-02T06:44:26.864850267Z" level=info msg="Removed pod sandbox: 3e74af7f59d1ca7d3d0c048dd145b59d4754e41088ad32175fcfb9f88bddeea4" id=84f47985-7f23-4b94-8336-80fa4c23fcfc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 06:44:29 addons-067378 crio[828]: time="2025-10-02T06:44:29.12994393Z" level=info msg="Running pod sandbox: default/busybox/POD" id=04df0588-0217-4880-9b5a-5f7aa46f4789 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 06:44:29 addons-067378 crio[828]: time="2025-10-02T06:44:29.13003669Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:44:29 addons-067378 crio[828]: time="2025-10-02T06:44:29.14404879Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:163a58ffcd5b300e4d60f93aa150785677b142b5d93e6074831916f8ba5d541f UID:387d8c6e-6b3d-4b66-b3f5-f1a69445358e NetNS:/var/run/netns/c62ee88f-47d4-4f0d-9b53-c7a33ebd82c8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000c354e8}] Aliases:map[]}"
	Oct 02 06:44:29 addons-067378 crio[828]: time="2025-10-02T06:44:29.144225784Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 02 06:44:29 addons-067378 crio[828]: time="2025-10-02T06:44:29.161336121Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:163a58ffcd5b300e4d60f93aa150785677b142b5d93e6074831916f8ba5d541f UID:387d8c6e-6b3d-4b66-b3f5-f1a69445358e NetNS:/var/run/netns/c62ee88f-47d4-4f0d-9b53-c7a33ebd82c8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000c354e8}] Aliases:map[]}"
	Oct 02 06:44:29 addons-067378 crio[828]: time="2025-10-02T06:44:29.161670139Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 02 06:44:29 addons-067378 crio[828]: time="2025-10-02T06:44:29.169044077Z" level=info msg="Ran pod sandbox 163a58ffcd5b300e4d60f93aa150785677b142b5d93e6074831916f8ba5d541f with infra container: default/busybox/POD" id=04df0588-0217-4880-9b5a-5f7aa46f4789 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 06:44:29 addons-067378 crio[828]: time="2025-10-02T06:44:29.170543637Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d2c3c2b9-1c67-49b6-8252-90b9e1389b2c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:44:29 addons-067378 crio[828]: time="2025-10-02T06:44:29.170686514Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d2c3c2b9-1c67-49b6-8252-90b9e1389b2c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:44:29 addons-067378 crio[828]: time="2025-10-02T06:44:29.170728311Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d2c3c2b9-1c67-49b6-8252-90b9e1389b2c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:44:29 addons-067378 crio[828]: time="2025-10-02T06:44:29.171658347Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=188ca3aa-7558-45a5-afff-d7a45eeebd20 name=/runtime.v1.ImageService/PullImage
	Oct 02 06:44:29 addons-067378 crio[828]: time="2025-10-02T06:44:29.173312172Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 02 06:44:31 addons-067378 crio[828]: time="2025-10-02T06:44:31.233162594Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=188ca3aa-7558-45a5-afff-d7a45eeebd20 name=/runtime.v1.ImageService/PullImage
	Oct 02 06:44:31 addons-067378 crio[828]: time="2025-10-02T06:44:31.233710029Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6c9b936d-0309-411d-afd4-dc5805d61390 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:44:31 addons-067378 crio[828]: time="2025-10-02T06:44:31.235340577Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=16c57f0b-3a98-4e88-ada5-9f437c1307a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 06:44:31 addons-067378 crio[828]: time="2025-10-02T06:44:31.241350957Z" level=info msg="Creating container: default/busybox/busybox" id=daa9d011-eaa5-4eaf-905b-62f4b618a3d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:44:31 addons-067378 crio[828]: time="2025-10-02T06:44:31.242126448Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:44:31 addons-067378 crio[828]: time="2025-10-02T06:44:31.248534642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:44:31 addons-067378 crio[828]: time="2025-10-02T06:44:31.249172137Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 06:44:31 addons-067378 crio[828]: time="2025-10-02T06:44:31.268139781Z" level=info msg="Created container 7004a722b720c9df516406668cf898d6d6d98533aacd9fe38b0a3c23d67d6b98: default/busybox/busybox" id=daa9d011-eaa5-4eaf-905b-62f4b618a3d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 06:44:31 addons-067378 crio[828]: time="2025-10-02T06:44:31.269148471Z" level=info msg="Starting container: 7004a722b720c9df516406668cf898d6d6d98533aacd9fe38b0a3c23d67d6b98" id=9ae7a7aa-d919-4e89-8b9b-ba6bd359164c name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 06:44:31 addons-067378 crio[828]: time="2025-10-02T06:44:31.270814079Z" level=info msg="Started container" PID=4915 containerID=7004a722b720c9df516406668cf898d6d6d98533aacd9fe38b0a3c23d67d6b98 description=default/busybox/busybox id=9ae7a7aa-d919-4e89-8b9b-ba6bd359164c name=/runtime.v1.RuntimeService/StartContainer sandboxID=163a58ffcd5b300e4d60f93aa150785677b142b5d93e6074831916f8ba5d541f
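(The pull sequence above — status check, "not found", pull by tag, resolution to a digest — can be reproduced against the same CRI-O instance with crictl from inside the node; a sketch using the profile name from this run:)

minikube -p addons-067378 ssh
sudo crictl images | grep busybox                                # images known to CRI-O
sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc        # same pull the kubelet requested
sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc    # shows the resolved digest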
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	7004a722b720c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   163a58ffcd5b3       busybox                                    default
	715051dd29f98       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          17 seconds ago       Running             csi-snapshotter                          0                   fd3e3ed4b2778       csi-hostpathplugin-g5rfp                   kube-system
	03a89e8a85aa8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 18 seconds ago       Running             gcp-auth                                 0                   7f3aa1d4e528e       gcp-auth-78565c9fb4-sf7d5                  gcp-auth
	d72616d82a4c6       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          21 seconds ago       Running             csi-provisioner                          0                   fd3e3ed4b2778       csi-hostpathplugin-g5rfp                   kube-system
	850c05bdc05e6       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            23 seconds ago       Running             liveness-probe                           0                   fd3e3ed4b2778       csi-hostpathplugin-g5rfp                   kube-system
	96695eb2b2b1c       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           24 seconds ago       Running             hostpath                                 0                   fd3e3ed4b2778       csi-hostpathplugin-g5rfp                   kube-system
	8e159425d0843       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                25 seconds ago       Running             node-driver-registrar                    0                   fd3e3ed4b2778       csi-hostpathplugin-g5rfp                   kube-system
	f0e88be7831a3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:74b72c3673aff7e1fa7c3ebae80b5dbe5446ce1906ef8d4f98d4b9f6e72c88e1                            27 seconds ago       Running             gadget                                   0                   84f50fa31f325       gadget-bvpt5                               gadget
	855ae3081a142       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             31 seconds ago       Running             controller                               0                   275a88e59c85e       ingress-nginx-controller-9cc49f96f-jv8pp   ingress-nginx
	35286e26bd2b2       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              37 seconds ago       Running             registry-proxy                           0                   b568908f00078       registry-proxy-zrq82                       kube-system
	38a84c1da31e3       gcr.io/cloud-spanner-emulator/emulator@sha256:77d0cd8103fe32875bbb04c070a7d1db292093b65d11c99c00cf39e8a13852f5                               40 seconds ago       Running             cloud-spanner-emulator                   0                   32bfaeb6e64fc       cloud-spanner-emulator-85f6b7fc65-nt86x    default
	3caf90b5c6d09       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           44 seconds ago       Running             registry                                 0                   4e724c3202c9d       registry-66898fdd98-w2szx                  kube-system
	6c102718e7f7f       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        47 seconds ago       Running             metrics-server                           0                   9d74966aa9792       metrics-server-85b7d694d7-6x654            kube-system
	8832f8099b85d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   49 seconds ago       Running             csi-external-health-monitor-controller   0                   fd3e3ed4b2778       csi-hostpathplugin-g5rfp                   kube-system
	69fbb8d36215a       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              51 seconds ago       Running             csi-resizer                              0                   8921485f2ce4d       csi-hostpath-resizer-0                     kube-system
	f0b36ca509d15       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               53 seconds ago       Running             minikube-ingress-dns                     0                   829467d1d5587       kube-ingress-dns-minikube                  kube-system
	e4e74e65e570a       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   63779bb51994a       csi-hostpath-attacher-0                    kube-system
	db9280fb3f8c3       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   0578acb842464       snapshot-controller-7d9fbc56b8-vvfqw       kube-system
	0cbf532af43dd       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   40d78e61eb145       snapshot-controller-7d9fbc56b8-57t4l       kube-system
	7a322d3dc58d8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   About a minute ago   Exited              patch                                    0                   e69c408c3a50b       ingress-nginx-admission-patch-dqc9b        ingress-nginx
	1bc50c5a2a408       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   ae93b898b1aba       nvidia-device-plugin-daemonset-kjxmr       kube-system
	8319af6a35e19       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   About a minute ago   Exited              create                                   0                   75efa1b378415       ingress-nginx-admission-create-sp78n       ingress-nginx
	b39b1b42acab5       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   b344b92e7963d       local-path-provisioner-648f6765c9-mrnqw    local-path-storage
	35efe6f5a1350       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   28c7544b1ab82       yakd-dashboard-5ff678cb9-x6zz2             yakd-dashboard
	23849ffb383b4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   b20e4d911d382       storage-provisioner                        kube-system
	cf51374ee4e78       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   449ae6dc50fde       coredns-66bc5c9577-hqkgq                   kube-system
	8cfee21867a88       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   53d68908732bc       kindnet-rvljv                              kube-system
	28e97317d945c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   107b2ab53cae5       kube-proxy-glkj6                           kube-system
	26b745984d39c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   d3aece2f216d9       kube-scheduler-addons-067378               kube-system
	f91e161872e50       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   f2b21a23ed9a9       etcd-addons-067378                         kube-system
	4d452e796395f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   405fb14328cec       kube-apiserver-addons-067378               kube-system
	b06978953fd6c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   9f99f54465f46       kube-controller-manager-addons-067378      kube-system
	
	
	==> coredns [cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c] <==
	[INFO] 10.244.0.14:46231 - 44122 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000061309s
	[INFO] 10.244.0.14:46231 - 46228 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00226147s
	[INFO] 10.244.0.14:46231 - 15787 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001997467s
	[INFO] 10.244.0.14:46231 - 56632 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000118532s
	[INFO] 10.244.0.14:46231 - 47175 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000076169s
	[INFO] 10.244.0.14:59768 - 55084 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00014968s
	[INFO] 10.244.0.14:59768 - 54855 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000249233s
	[INFO] 10.244.0.14:36102 - 5607 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119443s
	[INFO] 10.244.0.14:36102 - 5171 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000105232s
	[INFO] 10.244.0.14:47236 - 50138 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000279453s
	[INFO] 10.244.0.14:47236 - 49702 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000144781s
	[INFO] 10.244.0.14:41715 - 33677 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001372731s
	[INFO] 10.244.0.14:41715 - 33496 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001496252s
	[INFO] 10.244.0.14:39111 - 1989 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000109269s
	[INFO] 10.244.0.14:39111 - 1560 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000188637s
	[INFO] 10.244.0.21:52495 - 41689 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000259432s
	[INFO] 10.244.0.21:59231 - 46361 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000289471s
	[INFO] 10.244.0.21:49736 - 56506 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000270099s
	[INFO] 10.244.0.21:32826 - 34400 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000285763s
	[INFO] 10.244.0.21:51370 - 14220 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120305s
	[INFO] 10.244.0.21:51278 - 63662 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000288536s
	[INFO] 10.244.0.21:45331 - 33996 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002335128s
	[INFO] 10.244.0.21:37531 - 4837 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001878894s
	[INFO] 10.244.0.21:51175 - 37384 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001771562s
	[INFO] 10.244.0.21:43833 - 55114 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000916817s
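(The NXDOMAIN chains above — the same name retried with each cluster and EC2 search domain before the bare name resolves — are the normal effect of the pod's resolv.conf search list and ndots setting. One way to see the list that produces them, using the busybox pod created earlier; the contents in the comment are typical for this setup, not captured from this run:)

kubectl exec busybox -- cat /etc/resolv.conf
# typically something like:
#   search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
#   nameserver 10.96.0.10
#   options ndots:5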
	
	
	==> describe nodes <==
	Name:               addons-067378
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-067378
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=addons-067378
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_42_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-067378
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-067378"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:42:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-067378
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 06:44:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 06:44:40 +0000   Thu, 02 Oct 2025 06:42:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 06:44:40 +0000   Thu, 02 Oct 2025 06:42:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 06:44:40 +0000   Thu, 02 Oct 2025 06:42:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 06:44:40 +0000   Thu, 02 Oct 2025 06:43:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-067378
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c49bef4fc834808b914a36b06dbf372
	  System UUID:                2f1814d6-1357-446a-b78d-d0dacf031115
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-85f6b7fc65-nt86x     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  gadget                      gadget-bvpt5                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  gcp-auth                    gcp-auth-78565c9fb4-sf7d5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-jv8pp    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m2s
	  kube-system                 coredns-66bc5c9577-hqkgq                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m8s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 csi-hostpathplugin-g5rfp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 etcd-addons-067378                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m13s
	  kube-system                 kindnet-rvljv                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m9s
	  kube-system                 kube-apiserver-addons-067378                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-controller-manager-addons-067378       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-glkj6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-scheduler-addons-067378                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 metrics-server-85b7d694d7-6x654             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m3s
	  kube-system                 nvidia-device-plugin-daemonset-kjxmr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 registry-66898fdd98-w2szx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 registry-creds-764b6fb674-j77fn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 registry-proxy-zrq82                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 snapshot-controller-7d9fbc56b8-57t4l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 snapshot-controller-7d9fbc56b8-vvfqw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  local-path-storage          local-path-provisioner-648f6765c9-mrnqw     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-x6zz2              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m7s                   kube-proxy       
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m20s (x8 over 2m20s)  kubelet          Node addons-067378 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m20s (x8 over 2m20s)  kubelet          Node addons-067378 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s (x8 over 2m20s)  kubelet          Node addons-067378 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m14s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m14s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m13s                  kubelet          Node addons-067378 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m13s                  kubelet          Node addons-067378 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m13s                  kubelet          Node addons-067378 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m10s                  node-controller  Node addons-067378 event: Registered Node addons-067378 in Controller
	  Normal   NodeReady                87s                    kubelet          Node addons-067378 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014797] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531434] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.039899] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.787301] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.571073] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 2 05:52] hrtimer: interrupt took 24222969 ns
	[Oct 2 06:40] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:42] overlayfs: idmapped layers are currently not supported
	[  +0.072713] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d] <==
	{"level":"warn","ts":"2025-10-02T06:42:22.843454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.853201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.871004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.893250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.904913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.921215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.938286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.958987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.977215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:22.989581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.006283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.032945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.055873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.070721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.096383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.123999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.140892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.158898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:23.241398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:39.215408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:42:39.231745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:43:00.831153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:43:00.846072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:43:00.951736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:43:00.967064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38046","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [03a89e8a85aa8b6afaf5fac71d171429d214ab40fa4e857d0f32ec4ed024d9dd] <==
	2025/10/02 06:44:21 GCP Auth Webhook started!
	2025/10/02 06:44:28 Ready to marshal response ...
	2025/10/02 06:44:28 Ready to write response ...
	2025/10/02 06:44:28 Ready to marshal response ...
	2025/10/02 06:44:28 Ready to write response ...
	2025/10/02 06:44:28 Ready to marshal response ...
	2025/10/02 06:44:28 Ready to write response ...
	
	
	==> kernel <==
	 06:44:40 up  1:27,  0 user,  load average: 3.06, 3.22, 3.35
	Linux addons-067378 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c] <==
	E1002 06:43:03.109387       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 06:43:03.109497       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 06:43:03.109603       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 06:43:03.109680       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 06:43:04.209462       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 06:43:04.209497       1 metrics.go:72] Registering metrics
	I1002 06:43:04.209569       1 controller.go:711] "Syncing nftables rules"
	I1002 06:43:13.111969       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:43:13.112016       1 main.go:301] handling current node
	I1002 06:43:23.109140       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:43:23.109170       1 main.go:301] handling current node
	I1002 06:43:33.111171       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:43:33.111219       1 main.go:301] handling current node
	I1002 06:43:43.111208       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:43:43.111236       1 main.go:301] handling current node
	I1002 06:43:53.108276       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:43:53.108348       1 main.go:301] handling current node
	I1002 06:44:03.108147       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:44:03.108282       1 main.go:301] handling current node
	I1002 06:44:13.109060       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:44:13.109107       1 main.go:301] handling current node
	I1002 06:44:23.108117       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:44:23.108150       1 main.go:301] handling current node
	I1002 06:44:33.108193       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:44:33.108304       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5] <==
	I1002 06:43:37.913195       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 06:44:05.468948       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:44:05.469030       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1002 06:44:05.469678       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.30.166:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.30.166:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.30.166:443: connect: connection refused" logger="UnhandledError"
	E1002 06:44:05.471391       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.30.166:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.30.166:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.30.166:443: connect: connection refused" logger="UnhandledError"
	W1002 06:44:06.469327       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:44:06.469441       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 06:44:06.469463       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 06:44:06.469542       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:44:06.469566       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 06:44:06.470641       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1002 06:44:10.484077       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.30.166:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.30.166:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1002 06:44:10.484739       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 06:44:10.484791       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 06:44:10.532063       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1002 06:44:38.399827       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57258: use of closed network connection
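(The repeated 503s above come from the aggregated metrics API: the `v1beta1.metrics.k8s.io` APIService points at the metrics-server Service, which was still refusing connections while the pod started up. A quick way to check that wiring after the fact:)

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl -n kube-system get deploy,svc,endpoints metrics-server
kubectl top node        # succeeds only once the APIService reports Available=True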
	
	
	==> kube-controller-manager [b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c] <==
	I1002 06:42:30.836266       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 06:42:30.836598       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 06:42:30.836790       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 06:42:30.837881       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 06:42:30.863608       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-067378" podCIDRs=["10.244.0.0/24"]
	I1002 06:42:30.863757       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 06:42:30.864335       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 06:42:30.911177       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 06:42:30.911268       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 06:42:30.911299       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 06:42:30.933073       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 06:42:37.034469       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 06:42:37.065709       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 06:43:00.824509       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:43:00.824660       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1002 06:43:00.824717       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 06:43:00.925284       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:43:00.940570       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 06:43:00.944709       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 06:43:01.045355       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 06:43:15.833417       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1002 06:43:30.931311       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:43:31.055416       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1002 06:44:00.935344       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:44:01.068524       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f] <==
	I1002 06:42:32.886883       1 server_linux.go:53] "Using iptables proxy"
	I1002 06:42:33.084445       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:42:33.184839       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:42:33.184870       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 06:42:33.184944       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:42:33.260355       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 06:42:33.260406       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:42:33.280788       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:42:33.281079       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:42:33.281093       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:42:33.286416       1 config.go:200] "Starting service config controller"
	I1002 06:42:33.298085       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:42:33.298104       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 06:42:33.294623       1 config.go:309] "Starting node config controller"
	I1002 06:42:33.298137       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:42:33.298148       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:42:33.294266       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:42:33.298155       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:42:33.298160       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 06:42:33.294278       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:42:33.298205       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:42:33.298210       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348] <==
	E1002 06:42:24.091272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:42:24.091360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:42:24.091435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:42:24.095616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:42:24.095771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 06:42:24.095863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 06:42:24.095927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 06:42:24.096009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 06:42:24.096092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:42:24.096162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:42:24.096221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:42:24.096351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 06:42:24.096382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:42:24.910949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 06:42:24.975288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:42:24.978504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:42:25.019759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 06:42:25.035680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 06:42:25.054089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:42:25.188935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:42:25.200696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:42:25.240537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 06:42:25.267139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:42:25.444441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 06:42:28.074951       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 06:43:56 addons-067378 kubelet[1286]: I1002 06:43:56.403863    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-w2szx" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 06:44:00 addons-067378 kubelet[1286]: I1002 06:44:00.419624    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-85f6b7fc65-nt86x" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 06:44:00 addons-067378 kubelet[1286]: I1002 06:44:00.493608    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/cloud-spanner-emulator-85f6b7fc65-nt86x" podStartSLOduration=40.404318985 podStartE2EDuration="1m25.493570079s" podCreationTimestamp="2025-10-02 06:42:35 +0000 UTC" firstStartedPulling="2025-10-02 06:43:14.38794251 +0000 UTC m=+47.711676286" lastFinishedPulling="2025-10-02 06:43:59.477193595 +0000 UTC m=+92.800927380" observedRunningTime="2025-10-02 06:44:00.492786301 +0000 UTC m=+93.816520103" watchObservedRunningTime="2025-10-02 06:44:00.493570079 +0000 UTC m=+93.817303872"
	Oct 02 06:44:00 addons-067378 kubelet[1286]: I1002 06:44:00.495427    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-66898fdd98-w2szx" podStartSLOduration=43.736712701 podStartE2EDuration="1m24.495393514s" podCreationTimestamp="2025-10-02 06:42:36 +0000 UTC" firstStartedPulling="2025-10-02 06:43:14.379300186 +0000 UTC m=+47.703033971" lastFinishedPulling="2025-10-02 06:43:55.137980999 +0000 UTC m=+88.461714784" observedRunningTime="2025-10-02 06:43:55.420339126 +0000 UTC m=+88.744072919" watchObservedRunningTime="2025-10-02 06:44:00.495393514 +0000 UTC m=+93.819127299"
	Oct 02 06:44:01 addons-067378 kubelet[1286]: I1002 06:44:01.423431    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-85f6b7fc65-nt86x" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 06:44:03 addons-067378 kubelet[1286]: I1002 06:44:03.436431    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zrq82" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 06:44:03 addons-067378 kubelet[1286]: I1002 06:44:03.470423    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-zrq82" podStartSLOduration=2.067671576 podStartE2EDuration="50.470392697s" podCreationTimestamp="2025-10-02 06:43:13 +0000 UTC" firstStartedPulling="2025-10-02 06:43:14.410921873 +0000 UTC m=+47.734655658" lastFinishedPulling="2025-10-02 06:44:02.813642994 +0000 UTC m=+96.137376779" observedRunningTime="2025-10-02 06:44:03.464817143 +0000 UTC m=+96.788550928" watchObservedRunningTime="2025-10-02 06:44:03.470392697 +0000 UTC m=+96.794126482"
	Oct 02 06:44:04 addons-067378 kubelet[1286]: I1002 06:44:04.456361    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zrq82" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 06:44:09 addons-067378 kubelet[1286]: I1002 06:44:09.490656    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-jv8pp" podStartSLOduration=43.802346255 podStartE2EDuration="1m31.490637516s" podCreationTimestamp="2025-10-02 06:42:38 +0000 UTC" firstStartedPulling="2025-10-02 06:43:21.152086979 +0000 UTC m=+54.475820764" lastFinishedPulling="2025-10-02 06:44:08.840378232 +0000 UTC m=+102.164112025" observedRunningTime="2025-10-02 06:44:09.489725515 +0000 UTC m=+102.813459317" watchObservedRunningTime="2025-10-02 06:44:09.490637516 +0000 UTC m=+102.814371301"
	Oct 02 06:44:16 addons-067378 kubelet[1286]: I1002 06:44:16.694863    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-bvpt5" podStartSLOduration=70.187002389 podStartE2EDuration="1m39.694795886s" podCreationTimestamp="2025-10-02 06:42:37 +0000 UTC" firstStartedPulling="2025-10-02 06:43:43.454545207 +0000 UTC m=+76.778278992" lastFinishedPulling="2025-10-02 06:44:12.962338705 +0000 UTC m=+106.286072489" observedRunningTime="2025-10-02 06:44:13.532926481 +0000 UTC m=+106.856660274" watchObservedRunningTime="2025-10-02 06:44:16.694795886 +0000 UTC m=+110.018529671"
	Oct 02 06:44:16 addons-067378 kubelet[1286]: I1002 06:44:16.992579    1286 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 02 06:44:16 addons-067378 kubelet[1286]: I1002 06:44:16.992646    1286 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 02 06:44:17 addons-067378 kubelet[1286]: E1002 06:44:17.319630    1286 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 02 06:44:17 addons-067378 kubelet[1286]: E1002 06:44:17.319711    1286 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62c7e651-a525-434a-b3a2-67917ea0034f-gcr-creds podName:62c7e651-a525-434a-b3a2-67917ea0034f nodeName:}" failed. No retries permitted until 2025-10-02 06:45:21.319694061 +0000 UTC m=+174.643427854 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/62c7e651-a525-434a-b3a2-67917ea0034f-gcr-creds") pod "registry-creds-764b6fb674-j77fn" (UID: "62c7e651-a525-434a-b3a2-67917ea0034f") : secret "registry-creds-gcr" not found
	Oct 02 06:44:17 addons-067378 kubelet[1286]: W1002 06:44:17.658361    1286 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743/crio-7f3aa1d4e528ed377399a56fe34762d92dc120e752336b91b156487f8b6a43c2 WatchSource:0}: Error finding container 7f3aa1d4e528ed377399a56fe34762d92dc120e752336b91b156487f8b6a43c2: Status 404 returned error can't find the container with id 7f3aa1d4e528ed377399a56fe34762d92dc120e752336b91b156487f8b6a43c2
	Oct 02 06:44:22 addons-067378 kubelet[1286]: I1002 06:44:22.046379    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-sf7d5" podStartSLOduration=98.271741049 podStartE2EDuration="1m42.046354854s" podCreationTimestamp="2025-10-02 06:42:40 +0000 UTC" firstStartedPulling="2025-10-02 06:44:17.663604904 +0000 UTC m=+110.987338688" lastFinishedPulling="2025-10-02 06:44:21.438218708 +0000 UTC m=+114.761952493" observedRunningTime="2025-10-02 06:44:21.604818706 +0000 UTC m=+114.928552507" watchObservedRunningTime="2025-10-02 06:44:22.046354854 +0000 UTC m=+115.370088647"
	Oct 02 06:44:22 addons-067378 kubelet[1286]: I1002 06:44:22.804509    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ff26aeb-6b37-4eb9-91cd-529cdbb3f0d2" path="/var/lib/kubelet/pods/4ff26aeb-6b37-4eb9-91cd-529cdbb3f0d2/volumes"
	Oct 02 06:44:22 addons-067378 kubelet[1286]: I1002 06:44:22.805042    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a4ea126-9c36-4508-9b3f-e1821af0efa3" path="/var/lib/kubelet/pods/9a4ea126-9c36-4508-9b3f-e1821af0efa3/volumes"
	Oct 02 06:44:26 addons-067378 kubelet[1286]: I1002 06:44:26.823246    1286 scope.go:117] "RemoveContainer" containerID="a5e89b2e0b015dca41d90217d0d12729ab88227ad7d3cdf5c6c963b306375488"
	Oct 02 06:44:26 addons-067378 kubelet[1286]: I1002 06:44:26.833169    1286 scope.go:117] "RemoveContainer" containerID="fc4390aa683843d4d293a96a3b402c6c6a7ba7986f7fdc172912aad46e3533f4"
	Oct 02 06:44:26 addons-067378 kubelet[1286]: E1002 06:44:26.960232    1286 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ed80e81c573f77ed0cf28818b00be07533fb488a277a8f4b567f7d0947421240/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ed80e81c573f77ed0cf28818b00be07533fb488a277a8f4b567f7d0947421240/diff: no such file or directory, extraDiskErr: <nil>
	Oct 02 06:44:28 addons-067378 kubelet[1286]: I1002 06:44:28.818693    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-g5rfp" podStartSLOduration=7.250098962 podStartE2EDuration="1m15.818671906s" podCreationTimestamp="2025-10-02 06:43:13 +0000 UTC" firstStartedPulling="2025-10-02 06:43:14.370583959 +0000 UTC m=+47.694317744" lastFinishedPulling="2025-10-02 06:44:22.939156903 +0000 UTC m=+116.262890688" observedRunningTime="2025-10-02 06:44:23.620764963 +0000 UTC m=+116.944498756" watchObservedRunningTime="2025-10-02 06:44:28.818671906 +0000 UTC m=+122.142405691"
	Oct 02 06:44:28 addons-067378 kubelet[1286]: I1002 06:44:28.916013    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4jr5\" (UniqueName: \"kubernetes.io/projected/387d8c6e-6b3d-4b66-b3f5-f1a69445358e-kube-api-access-r4jr5\") pod \"busybox\" (UID: \"387d8c6e-6b3d-4b66-b3f5-f1a69445358e\") " pod="default/busybox"
	Oct 02 06:44:28 addons-067378 kubelet[1286]: I1002 06:44:28.916103    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/387d8c6e-6b3d-4b66-b3f5-f1a69445358e-gcp-creds\") pod \"busybox\" (UID: \"387d8c6e-6b3d-4b66-b3f5-f1a69445358e\") " pod="default/busybox"
	Oct 02 06:44:29 addons-067378 kubelet[1286]: W1002 06:44:29.168518    1286 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/be6899c5910e4392b67fe331f2cb316bf5c93fe8888c5d02910f6dffc2b70743/crio-163a58ffcd5b300e4d60f93aa150785677b142b5d93e6074831916f8ba5d541f WatchSource:0}: Error finding container 163a58ffcd5b300e4d60f93aa150785677b142b5d93e6074831916f8ba5d541f: Status 404 returned error can't find the container with id 163a58ffcd5b300e4d60f93aa150785677b142b5d93e6074831916f8ba5d541f
	
	
	==> storage-provisioner [23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237] <==
	W1002 06:44:16.656499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:18.659260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:18.665982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:20.673688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:20.678463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:22.683566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:22.691340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:24.695153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:24.699789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:26.703389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:26.708409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:28.711568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:28.717728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:30.720666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:30.728015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:32.732167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:32.736695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:34.740773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:34.745490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:36.748518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:36.752894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:38.756450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:38.762403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:40.765809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:44:40.773770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-067378 -n addons-067378
helpers_test.go:269: (dbg) Run:  kubectl --context addons-067378 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-sp78n ingress-nginx-admission-patch-dqc9b registry-creds-764b6fb674-j77fn
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-067378 describe pod ingress-nginx-admission-create-sp78n ingress-nginx-admission-patch-dqc9b registry-creds-764b6fb674-j77fn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-067378 describe pod ingress-nginx-admission-create-sp78n ingress-nginx-admission-patch-dqc9b registry-creds-764b6fb674-j77fn: exit status 1 (88.779042ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sp78n" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dqc9b" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-j77fn" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-067378 describe pod ingress-nginx-admission-create-sp78n ingress-nginx-admission-patch-dqc9b registry-creds-764b6fb674-j77fn: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 addons disable headlamp --alsologtostderr -v=1: exit status 11 (286.296245ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 06:44:41.583396  301648 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:44:41.584295  301648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:44:41.584330  301648 out.go:374] Setting ErrFile to fd 2...
	I1002 06:44:41.584350  301648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:44:41.584659  301648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:44:41.584973  301648 mustload.go:65] Loading cluster: addons-067378
	I1002 06:44:41.585401  301648 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:44:41.585439  301648 addons.go:606] checking whether the cluster is paused
	I1002 06:44:41.585572  301648 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:44:41.585606  301648 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:44:41.586085  301648 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:44:41.608966  301648 ssh_runner.go:195] Run: systemctl --version
	I1002 06:44:41.609017  301648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:44:41.635803  301648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:44:41.733892  301648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:44:41.733991  301648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:44:41.764782  301648 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:44:41.764807  301648 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:44:41.764812  301648 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:44:41.764818  301648 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:44:41.764822  301648 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:44:41.764827  301648 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:44:41.764831  301648 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:44:41.764835  301648 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:44:41.764838  301648 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:44:41.764846  301648 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:44:41.764859  301648 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:44:41.764873  301648 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:44:41.764877  301648 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:44:41.764880  301648 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:44:41.764884  301648 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:44:41.764891  301648 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:44:41.764899  301648 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:44:41.764904  301648 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:44:41.764907  301648 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:44:41.764914  301648 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:44:41.764919  301648 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:44:41.764922  301648 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:44:41.764925  301648 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:44:41.764931  301648 cri.go:89] found id: ""
	I1002 06:44:41.764998  301648 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:44:41.782502  301648 out.go:203] 
	W1002 06:44:41.785407  301648 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:44:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:44:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:44:41.785447  301648 out.go:285] * 
	* 
	W1002 06:44:41.790403  301648 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:44:41.793234  301648 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-067378 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.15s)
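Note: every "addons disable" failure in this run exits with the same MK_ADDON_DISABLE_PAUSED error. The trace above shows the paused-state check SSHing into the node and running "sudo runc list -f json", which exits 1 with "open /run/runc: no such file or directory", while the CRI-level crictl listing just before it succeeds. A minimal diagnostic sketch follows, assuming SSH access through the same profile; the /run/crun path is an assumption (the CRI-O image may be configured with crun rather than runc) and is not confirmed by this log.

	# check which OCI runtime state directory actually exists on the node (paths are illustrative)
	out/minikube-linux-arm64 -p addons-067378 ssh -- ls -d /run/runc /run/crun
	# the CRI-level listing that succeeds in the trace above
	out/minikube-linux-arm64 -p addons-067378 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the low-level check that fails with "open /run/runc: no such file or directory"
	out/minikube-linux-arm64 -p addons-067378 ssh -- sudo runc list -f json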

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-nt86x" [7c079130-5639-4aa6-ab1a-ca3a35b30555] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003189327s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (251.292609ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 06:46:03.425454  303636 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:46:03.426395  303636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:46:03.426436  303636 out.go:374] Setting ErrFile to fd 2...
	I1002 06:46:03.426456  303636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:46:03.426752  303636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:46:03.427124  303636 mustload.go:65] Loading cluster: addons-067378
	I1002 06:46:03.427607  303636 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:46:03.427654  303636 addons.go:606] checking whether the cluster is paused
	I1002 06:46:03.427786  303636 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:46:03.427827  303636 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:46:03.428366  303636 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:46:03.446983  303636 ssh_runner.go:195] Run: systemctl --version
	I1002 06:46:03.447066  303636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:46:03.465872  303636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:46:03.562852  303636 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:46:03.562947  303636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:46:03.595922  303636 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:46:03.595947  303636 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:46:03.595952  303636 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:46:03.595957  303636 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:46:03.595960  303636 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:46:03.595963  303636 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:46:03.595966  303636 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:46:03.595971  303636 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:46:03.595974  303636 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:46:03.595981  303636 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:46:03.595985  303636 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:46:03.595988  303636 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:46:03.595991  303636 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:46:03.595994  303636 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:46:03.595997  303636 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:46:03.596003  303636 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:46:03.596006  303636 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:46:03.596011  303636 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:46:03.596014  303636 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:46:03.596017  303636 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:46:03.596022  303636 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:46:03.596028  303636 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:46:03.596032  303636 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:46:03.596035  303636 cri.go:89] found id: ""
	I1002 06:46:03.596090  303636 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:46:03.611602  303636 out.go:203] 
	W1002 06:46:03.614572  303636 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:46:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:46:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:46:03.614594  303636 out.go:285] * 
	* 
	W1002 06:46:03.619574  303636 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:46:03.622527  303636 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-067378 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.26s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.42s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-067378 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-067378 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-067378 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [1088f6e7-5378-4fa8-9a46-fc34b39d4b3c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [1088f6e7-5378-4fa8-9a46-fc34b39d4b3c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [1088f6e7-5378-4fa8-9a46-fc34b39d4b3c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003477203s
addons_test.go:967: (dbg) Run:  kubectl --context addons-067378 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 ssh "cat /opt/local-path-provisioner/pvc-5c575a42-27bf-44ea-b0d8-7a407f2814bc_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-067378 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-067378 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (248.084181ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 06:45:57.164400  303522 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:45:57.165299  303522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:45:57.165330  303522 out.go:374] Setting ErrFile to fd 2...
	I1002 06:45:57.165336  303522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:45:57.165614  303522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:45:57.165962  303522 mustload.go:65] Loading cluster: addons-067378
	I1002 06:45:57.166329  303522 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:45:57.166346  303522 addons.go:606] checking whether the cluster is paused
	I1002 06:45:57.166448  303522 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:45:57.166468  303522 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:45:57.166965  303522 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:45:57.184811  303522 ssh_runner.go:195] Run: systemctl --version
	I1002 06:45:57.184873  303522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:45:57.206523  303522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:45:57.301597  303522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:45:57.301680  303522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:45:57.330979  303522 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:45:57.331008  303522 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:45:57.331013  303522 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:45:57.331017  303522 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:45:57.331020  303522 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:45:57.331024  303522 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:45:57.331027  303522 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:45:57.331029  303522 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:45:57.331036  303522 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:45:57.331043  303522 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:45:57.331047  303522 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:45:57.331050  303522 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:45:57.331053  303522 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:45:57.331056  303522 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:45:57.331059  303522 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:45:57.331064  303522 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:45:57.331071  303522 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:45:57.331075  303522 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:45:57.331101  303522 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:45:57.331106  303522 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:45:57.331111  303522 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:45:57.331114  303522 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:45:57.331118  303522 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:45:57.331121  303522 cri.go:89] found id: ""
	I1002 06:45:57.331171  303522 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:45:57.345961  303522 out.go:203] 
	W1002 06:45:57.348934  303522 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:45:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:45:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:45:57.348956  303522 out.go:285] * 
	* 
	W1002 06:45:57.353918  303522 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:45:57.357165  303522 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-067378 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.42s)
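
All of the addons-disable failures in this run (storage-provisioner-rancher above, nvidia-device-plugin and yakd below) abort at the same point: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json", and the runc call exits 1 because /run/runc does not exist on this CRI-O node. A minimal manual reproduction, assuming SSH access to the addons-067378 node via the profile, would be:

        minikube ssh -p addons-067378 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds, prints the container IDs seen in the log
        minikube ssh -p addons-067378 -- sudo runc list -f json                                                      # fails: open /run/runc: no such file or directory

The crictl call is the one that reflects container state under CRI-O; the follow-up runc call is what turns the check into the MK_ADDON_DISABLE_PAUSED error and exit status 11.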

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-kjxmr" [2391a5b9-29ae-4cd1-83fe-07aca873c5d1] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003526386s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (256.195108ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 06:45:43.472673  303165 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:45:43.473410  303165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:45:43.473427  303165 out.go:374] Setting ErrFile to fd 2...
	I1002 06:45:43.473435  303165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:45:43.473716  303165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:45:43.474023  303165 mustload.go:65] Loading cluster: addons-067378
	I1002 06:45:43.474402  303165 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:45:43.474421  303165 addons.go:606] checking whether the cluster is paused
	I1002 06:45:43.474525  303165 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:45:43.474545  303165 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:45:43.475005  303165 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:45:43.493389  303165 ssh_runner.go:195] Run: systemctl --version
	I1002 06:45:43.493457  303165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:45:43.518276  303165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:45:43.617706  303165 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:45:43.617816  303165 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:45:43.647687  303165 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:45:43.647712  303165 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:45:43.647718  303165 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:45:43.647722  303165 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:45:43.647726  303165 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:45:43.647730  303165 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:45:43.647733  303165 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:45:43.647736  303165 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:45:43.647740  303165 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:45:43.647751  303165 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:45:43.647758  303165 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:45:43.647762  303165 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:45:43.647766  303165 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:45:43.647769  303165 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:45:43.647773  303165 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:45:43.647780  303165 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:45:43.647787  303165 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:45:43.647792  303165 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:45:43.647795  303165 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:45:43.647798  303165 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:45:43.647803  303165 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:45:43.647807  303165 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:45:43.647810  303165 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:45:43.647813  303165 cri.go:89] found id: ""
	I1002 06:45:43.647869  303165 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:45:43.663981  303165 out.go:203] 
	W1002 06:45:43.667149  303165 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:45:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:45:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:45:43.667229  303165 out.go:285] * 
	* 
	W1002 06:45:43.672460  303165 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:45:43.675567  303165 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-067378 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.26s)
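
The readiness wait itself passed here (the daemonset pod was Running within about 6 seconds); only the follow-up addons disable call failed, for the same /run/runc reason noted above. An equivalent manual readiness check with kubectl, assuming the kubeconfig context created by minikube is named after the profile addons-067378, would be:

        kubectl --context addons-067378 -n kube-system wait --for=condition=Ready pod -l name=nvidia-device-plugin-ds --timeout=6m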

                                                
                                    
TestAddons/parallel/Yakd (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-x6zz2" [9c961a1a-3b8b-4d01-991e-813c7c786ee9] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003972336s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-067378 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-067378 addons disable yakd --alsologtostderr -v=1: exit status 11 (257.463988ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 06:45:48.737784  303223 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:45:48.738574  303223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:45:48.738591  303223 out.go:374] Setting ErrFile to fd 2...
	I1002 06:45:48.738597  303223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:45:48.738949  303223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:45:48.740052  303223 mustload.go:65] Loading cluster: addons-067378
	I1002 06:45:48.741140  303223 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:45:48.741170  303223 addons.go:606] checking whether the cluster is paused
	I1002 06:45:48.741308  303223 config.go:182] Loaded profile config "addons-067378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:45:48.741324  303223 host.go:66] Checking if "addons-067378" exists ...
	I1002 06:45:48.741791  303223 cli_runner.go:164] Run: docker container inspect addons-067378 --format={{.State.Status}}
	I1002 06:45:48.761715  303223 ssh_runner.go:195] Run: systemctl --version
	I1002 06:45:48.761782  303223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-067378
	I1002 06:45:48.780865  303223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/addons-067378/id_rsa Username:docker}
	I1002 06:45:48.877815  303223 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 06:45:48.877950  303223 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:45:48.910788  303223 cri.go:89] found id: "715051dd29f989af88cb0218761a443e90441e249c43236d94408c05b6361385"
	I1002 06:45:48.910860  303223 cri.go:89] found id: "d72616d82a4c6282ff84955e7662a5919ad095c736571517b2afac50c1df5b01"
	I1002 06:45:48.910880  303223 cri.go:89] found id: "850c05bdc05e667e20e67a2c3c0d67946a5f9562180447b3cd64048d2af533dc"
	I1002 06:45:48.910901  303223 cri.go:89] found id: "96695eb2b2b1c2c83d7f910930325d8044320ef43513d0d094b4ada89a7c6f47"
	I1002 06:45:48.910935  303223 cri.go:89] found id: "8e159425d084365526c27c04c557d352e9cab4574e03c24c996334f05e524c54"
	I1002 06:45:48.910953  303223 cri.go:89] found id: "35286e26bd2b2d7dd66f347cea8933ad13652a3e260f4ed55c03a51ba3f134d0"
	I1002 06:45:48.910975  303223 cri.go:89] found id: "3caf90b5c6d091bbb51bc4bb58596d418fdf6b7a39cf04270129e5fac5a929c3"
	I1002 06:45:48.910994  303223 cri.go:89] found id: "6c102718e7f7f3e4598ef786a896fbf0cd39c744911c8952c0f1cf2c70d14486"
	I1002 06:45:48.911021  303223 cri.go:89] found id: "8832f8099b85db1c99e648521f5e31854a0886cf65efa0d1c28920e313a22ca0"
	I1002 06:45:48.911041  303223 cri.go:89] found id: "69fbb8d36215a0b4533dfcd53cf85184eb3e3c86fe42e17f5acef43b983f418c"
	I1002 06:45:48.911065  303223 cri.go:89] found id: "f0b36ca509d15464e7e3b80c83b4acda55771dd125944621ebece2a441480879"
	I1002 06:45:48.911121  303223 cri.go:89] found id: "e4e74e65e570a9e15968cecfd6bc9beef2fd1d6e33a5abfaa596fdd6b1d416e7"
	I1002 06:45:48.911140  303223 cri.go:89] found id: "db9280fb3f8c354dd1e042e6e1e9fc6b99f6db8865def8600e1df6a68bdcb249"
	I1002 06:45:48.911154  303223 cri.go:89] found id: "0cbf532af43dd64287751fc680e5b9e97fbbbfa78702650da7c435cd2fd9c38e"
	I1002 06:45:48.911159  303223 cri.go:89] found id: "1bc50c5a2a408bc4dc63ba87cb7690c7dc3594d7fa9f7d2ae671142bb4671c5f"
	I1002 06:45:48.911164  303223 cri.go:89] found id: "23849ffb383b4542d85fb7b9f437ec3b52d8d957f753dedcd13fca1e2befd237"
	I1002 06:45:48.911167  303223 cri.go:89] found id: "cf51374ee4e780d8dbaf2ebb979d5ea7a1920b410077510d50ef29409b16351c"
	I1002 06:45:48.911173  303223 cri.go:89] found id: "8cfee21867a884fc0ffd50b594f19c28d4fa18d6a5c30ae9c524a68aa66f190c"
	I1002 06:45:48.911176  303223 cri.go:89] found id: "28e97317d945cc2738aa26350271929c795e077a19b95ec0e28c32aa2054761f"
	I1002 06:45:48.911179  303223 cri.go:89] found id: "26b745984d39c2936a801ae212393a7fc7ef4c80fb00cc1aece5bad483703348"
	I1002 06:45:48.911192  303223 cri.go:89] found id: "f91e161872e50bc4cc9774888bf9a62ea0ad0e6d55fc8a9a378e83ab1e3c2b0d"
	I1002 06:45:48.911198  303223 cri.go:89] found id: "4d452e796395f1f3dc772e2ed7bedfce8594a20411774a70028a84f3309da1d5"
	I1002 06:45:48.911202  303223 cri.go:89] found id: "b06978953fd6cdec60a348dedf557ca99590124005c9d7e20c231fc66897324c"
	I1002 06:45:48.911205  303223 cri.go:89] found id: ""
	I1002 06:45:48.911260  303223 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 06:45:48.926874  303223 out.go:203] 
	W1002 06:45:48.929794  303223 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:45:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:45:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 06:45:48.929821  303223 out.go:285] * 
	* 
	W1002 06:45:48.934935  303223 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 06:45:48.937838  303223 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-067378 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)
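
Yakd fails the same way: the dashboard pod becomes healthy, but the disable step aborts on the paused-state check. One way to confirm the cluster is not actually paused before retrying, assuming the profile name from this run, is:

        minikube status -p addons-067378

which reports the host, kubelet and apiserver state (a paused profile would show them as Paused or Stopped rather than Running).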

                                                
                                    
TestForceSystemdFlag (513.62s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-275910 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-275910 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m29.886233782s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-275910] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-275910" primary control-plane node in "force-systemd-flag-275910" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:47:44.391425  463695 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:47:44.391615  463695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:47:44.391642  463695 out.go:374] Setting ErrFile to fd 2...
	I1002 07:47:44.391662  463695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:47:44.391968  463695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:47:44.392425  463695 out.go:368] Setting JSON to false
	I1002 07:47:44.393343  463695 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9016,"bootTime":1759382249,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:47:44.393438  463695 start.go:140] virtualization:  
	I1002 07:47:44.397174  463695 out.go:179] * [force-systemd-flag-275910] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:47:44.401513  463695 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:47:44.401650  463695 notify.go:220] Checking for updates...
	I1002 07:47:44.407707  463695 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:47:44.410764  463695 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:47:44.413923  463695 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:47:44.417562  463695 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:47:44.420590  463695 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:47:44.424016  463695 config.go:182] Loaded profile config "kubernetes-upgrade-011391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:47:44.424148  463695 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:47:44.456702  463695 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:47:44.456842  463695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:47:44.516992  463695 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:47:44.507918652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:47:44.517101  463695 docker.go:318] overlay module found
	I1002 07:47:44.520324  463695 out.go:179] * Using the docker driver based on user configuration
	I1002 07:47:44.523327  463695 start.go:304] selected driver: docker
	I1002 07:47:44.523348  463695 start.go:924] validating driver "docker" against <nil>
	I1002 07:47:44.523363  463695 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:47:44.524131  463695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:47:44.580455  463695 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:47:44.571447607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:47:44.580620  463695 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 07:47:44.580858  463695 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 07:47:44.583755  463695 out.go:179] * Using Docker driver with root privileges
	I1002 07:47:44.586589  463695 cni.go:84] Creating CNI manager for ""
	I1002 07:47:44.586665  463695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:47:44.586678  463695 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 07:47:44.586773  463695 start.go:348] cluster config:
	{Name:force-systemd-flag-275910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-275910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:47:44.589804  463695 out.go:179] * Starting "force-systemd-flag-275910" primary control-plane node in "force-systemd-flag-275910" cluster
	I1002 07:47:44.592572  463695 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:47:44.595452  463695 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:47:44.598220  463695 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:47:44.598279  463695 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:47:44.598291  463695 cache.go:58] Caching tarball of preloaded images
	I1002 07:47:44.598324  463695 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:47:44.598393  463695 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:47:44.598403  463695 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:47:44.598503  463695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/config.json ...
	I1002 07:47:44.598520  463695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/config.json: {Name:mk1363a8acf01e59fd341e737660bcc5f7b7022d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:47:44.617503  463695 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:47:44.617531  463695 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:47:44.617555  463695 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:47:44.617584  463695 start.go:360] acquireMachinesLock for force-systemd-flag-275910: {Name:mk4ca5e0e2a3cd53f63673e1e9aa1b8f52cf41b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:47:44.617692  463695 start.go:364] duration metric: took 86.318µs to acquireMachinesLock for "force-systemd-flag-275910"
	I1002 07:47:44.617725  463695 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-275910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-275910 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:47:44.617796  463695 start.go:125] createHost starting for "" (driver="docker")
	I1002 07:47:44.621144  463695 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 07:47:44.621400  463695 start.go:159] libmachine.API.Create for "force-systemd-flag-275910" (driver="docker")
	I1002 07:47:44.621453  463695 client.go:168] LocalClient.Create starting
	I1002 07:47:44.621528  463695 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem
	I1002 07:47:44.621569  463695 main.go:141] libmachine: Decoding PEM data...
	I1002 07:47:44.621587  463695 main.go:141] libmachine: Parsing certificate...
	I1002 07:47:44.621649  463695 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem
	I1002 07:47:44.621687  463695 main.go:141] libmachine: Decoding PEM data...
	I1002 07:47:44.621702  463695 main.go:141] libmachine: Parsing certificate...
	I1002 07:47:44.622102  463695 cli_runner.go:164] Run: docker network inspect force-systemd-flag-275910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 07:47:44.638753  463695 cli_runner.go:211] docker network inspect force-systemd-flag-275910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 07:47:44.638834  463695 network_create.go:284] running [docker network inspect force-systemd-flag-275910] to gather additional debugging logs...
	I1002 07:47:44.638855  463695 cli_runner.go:164] Run: docker network inspect force-systemd-flag-275910
	W1002 07:47:44.656306  463695 cli_runner.go:211] docker network inspect force-systemd-flag-275910 returned with exit code 1
	I1002 07:47:44.656341  463695 network_create.go:287] error running [docker network inspect force-systemd-flag-275910]: docker network inspect force-systemd-flag-275910: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-275910 not found
	I1002 07:47:44.656356  463695 network_create.go:289] output of [docker network inspect force-systemd-flag-275910]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-275910 not found
	
	** /stderr **
	I1002 07:47:44.656487  463695 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:47:44.673640  463695 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-87a294cab4b5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:50:ad:a1:2a:88} reservation:<nil>}
	I1002 07:47:44.674134  463695 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-560172b9232e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:9f:ec:fb:3f:87} reservation:<nil>}
	I1002 07:47:44.674293  463695 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2eae6334e56d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:6a:a0:79:3a:d9} reservation:<nil>}
	I1002 07:47:44.674601  463695 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-66e185a7ccce IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:9a:ff:9a:f9:e6:41} reservation:<nil>}
	I1002 07:47:44.675040  463695 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a547a0}
	I1002 07:47:44.675066  463695 network_create.go:124] attempt to create docker network force-systemd-flag-275910 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 07:47:44.675201  463695 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-275910 force-systemd-flag-275910
	I1002 07:47:44.731289  463695 network_create.go:108] docker network force-systemd-flag-275910 192.168.85.0/24 created
	I1002 07:47:44.731327  463695 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-275910" container
	I1002 07:47:44.731409  463695 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 07:47:44.752947  463695 cli_runner.go:164] Run: docker volume create force-systemd-flag-275910 --label name.minikube.sigs.k8s.io=force-systemd-flag-275910 --label created_by.minikube.sigs.k8s.io=true
	I1002 07:47:44.780723  463695 oci.go:103] Successfully created a docker volume force-systemd-flag-275910
	I1002 07:47:44.780813  463695 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-275910-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-275910 --entrypoint /usr/bin/test -v force-systemd-flag-275910:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 07:47:45.591326  463695 oci.go:107] Successfully prepared a docker volume force-systemd-flag-275910
	I1002 07:47:45.591384  463695 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:47:45.591406  463695 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 07:47:45.591481  463695 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-275910:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 07:47:50.028396  463695 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-275910:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.43686881s)
	I1002 07:47:50.028430  463695 kic.go:203] duration metric: took 4.437020244s to extract preloaded images to volume ...
	W1002 07:47:50.028614  463695 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 07:47:50.028746  463695 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 07:47:50.091192  463695 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-275910 --name force-systemd-flag-275910 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-275910 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-275910 --network force-systemd-flag-275910 --ip 192.168.85.2 --volume force-systemd-flag-275910:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 07:47:50.425892  463695 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275910 --format={{.State.Running}}
	I1002 07:47:50.450095  463695 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275910 --format={{.State.Status}}
	I1002 07:47:50.474963  463695 cli_runner.go:164] Run: docker exec force-systemd-flag-275910 stat /var/lib/dpkg/alternatives/iptables
	I1002 07:47:50.528033  463695 oci.go:144] the created container "force-systemd-flag-275910" has a running status.
	I1002 07:47:50.528077  463695 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-flag-275910/id_rsa...
	I1002 07:47:50.743928  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-flag-275910/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 07:47:50.743992  463695 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-flag-275910/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 07:47:50.794501  463695 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275910 --format={{.State.Status}}
	I1002 07:47:50.813985  463695 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 07:47:50.814009  463695 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-275910 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 07:47:50.897005  463695 cli_runner.go:164] Run: docker container inspect force-systemd-flag-275910 --format={{.State.Status}}
	I1002 07:47:50.926133  463695 machine.go:93] provisionDockerMachine start ...
	I1002 07:47:50.926241  463695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275910
	I1002 07:47:50.953176  463695 main.go:141] libmachine: Using SSH client type: native
	I1002 07:47:50.953529  463695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I1002 07:47:50.953545  463695 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:47:50.954229  463695 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35784->127.0.0.1:33378: read: connection reset by peer
	I1002 07:47:54.095353  463695 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-275910
	
	I1002 07:47:54.095421  463695 ubuntu.go:182] provisioning hostname "force-systemd-flag-275910"
	I1002 07:47:54.095510  463695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275910
	I1002 07:47:54.112976  463695 main.go:141] libmachine: Using SSH client type: native
	I1002 07:47:54.113285  463695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I1002 07:47:54.113297  463695 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-275910 && echo "force-systemd-flag-275910" | sudo tee /etc/hostname
	I1002 07:47:54.265245  463695 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-275910
	
	I1002 07:47:54.265352  463695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275910
	I1002 07:47:54.285547  463695 main.go:141] libmachine: Using SSH client type: native
	I1002 07:47:54.285865  463695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I1002 07:47:54.285883  463695 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-275910' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-275910/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-275910' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:47:54.431376  463695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:47:54.431406  463695 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:47:54.431426  463695 ubuntu.go:190] setting up certificates
	I1002 07:47:54.431435  463695 provision.go:84] configureAuth start
	I1002 07:47:54.431519  463695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-275910
	I1002 07:47:54.452010  463695 provision.go:143] copyHostCerts
	I1002 07:47:54.452052  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:47:54.452085  463695 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:47:54.452097  463695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:47:54.452177  463695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:47:54.452278  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:47:54.452329  463695 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:47:54.452339  463695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:47:54.452368  463695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:47:54.452424  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:47:54.452451  463695 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:47:54.452461  463695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:47:54.452489  463695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:47:54.452552  463695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-275910 san=[127.0.0.1 192.168.85.2 force-systemd-flag-275910 localhost minikube]
	I1002 07:47:54.811154  463695 provision.go:177] copyRemoteCerts
	I1002 07:47:54.811276  463695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:47:54.811358  463695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275910
	I1002 07:47:54.844741  463695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-flag-275910/id_rsa Username:docker}
	I1002 07:47:54.946887  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:47:54.947013  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:47:54.965416  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:47:54.965491  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 07:47:54.983685  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:47:54.983801  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:47:55.016712  463695 provision.go:87] duration metric: took 585.216942ms to configureAuth
	I1002 07:47:55.016801  463695 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:47:55.017060  463695 config.go:182] Loaded profile config "force-systemd-flag-275910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:47:55.017231  463695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275910
	I1002 07:47:55.046876  463695 main.go:141] libmachine: Using SSH client type: native
	I1002 07:47:55.047302  463695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33378 <nil> <nil>}
	I1002 07:47:55.047326  463695 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:47:55.294409  463695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:47:55.294429  463695 machine.go:96] duration metric: took 4.368273868s to provisionDockerMachine
	I1002 07:47:55.294439  463695 client.go:171] duration metric: took 10.672975222s to LocalClient.Create
	I1002 07:47:55.294456  463695 start.go:167] duration metric: took 10.673057085s to libmachine.API.Create "force-systemd-flag-275910"
	I1002 07:47:55.294464  463695 start.go:293] postStartSetup for "force-systemd-flag-275910" (driver="docker")
	I1002 07:47:55.294477  463695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:47:55.294544  463695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:47:55.294589  463695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275910
	I1002 07:47:55.312748  463695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-flag-275910/id_rsa Username:docker}
	I1002 07:47:55.407513  463695 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:47:55.410964  463695 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:47:55.410994  463695 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:47:55.411007  463695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:47:55.411064  463695 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:47:55.411171  463695 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:47:55.411185  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:47:55.411295  463695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:47:55.418837  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:47:55.439142  463695 start.go:296] duration metric: took 144.658088ms for postStartSetup
	I1002 07:47:55.439529  463695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-275910
	I1002 07:47:55.460194  463695 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/config.json ...
	I1002 07:47:55.460488  463695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:47:55.460542  463695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275910
	I1002 07:47:55.477567  463695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-flag-275910/id_rsa Username:docker}
	I1002 07:47:55.572032  463695 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:47:55.576615  463695 start.go:128] duration metric: took 10.958802081s to createHost
	I1002 07:47:55.576638  463695 start.go:83] releasing machines lock for "force-systemd-flag-275910", held for 10.958932026s
	I1002 07:47:55.576709  463695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-275910
	I1002 07:47:55.593326  463695 ssh_runner.go:195] Run: cat /version.json
	I1002 07:47:55.593377  463695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275910
	I1002 07:47:55.593617  463695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:47:55.593684  463695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-275910
	I1002 07:47:55.612692  463695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-flag-275910/id_rsa Username:docker}
	I1002 07:47:55.620746  463695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33378 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-flag-275910/id_rsa Username:docker}
	I1002 07:47:55.702997  463695 ssh_runner.go:195] Run: systemctl --version
	I1002 07:47:55.793136  463695 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:47:55.837371  463695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:47:55.841771  463695 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:47:55.841907  463695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:47:55.871997  463695 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 07:47:55.872092  463695 start.go:495] detecting cgroup driver to use...
	I1002 07:47:55.872121  463695 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1002 07:47:55.872214  463695 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:47:55.890298  463695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:47:55.902982  463695 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:47:55.903047  463695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:47:55.921227  463695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:47:55.939495  463695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:47:56.062481  463695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:47:56.199882  463695 docker.go:234] disabling docker service ...
	I1002 07:47:56.200007  463695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:47:56.222847  463695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:47:56.236588  463695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:47:56.363314  463695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:47:56.480218  463695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:47:56.494786  463695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:47:56.509763  463695 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:47:56.509861  463695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:56.519827  463695 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 07:47:56.519944  463695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:56.530704  463695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:56.541096  463695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:56.550905  463695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:47:56.559988  463695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:56.569442  463695 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:56.584404  463695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:56.593955  463695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:47:56.602166  463695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:47:56.609869  463695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:47:56.725158  463695 ssh_runner.go:195] Run: sudo systemctl restart crio
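For reference, the sed edits above amount to a CRI-O drop-in along these lines; this is a sketch reconstructed from the commands shown in the log (TOML section headers omitted, values not read back from the node):
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]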
	I1002 07:47:56.863816  463695 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:47:56.863939  463695 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:47:56.868734  463695 start.go:563] Will wait 60s for crictl version
	I1002 07:47:56.868813  463695 ssh_runner.go:195] Run: which crictl
	I1002 07:47:56.872938  463695 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:47:56.901021  463695 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:47:56.901117  463695 ssh_runner.go:195] Run: crio --version
	I1002 07:47:56.930175  463695 ssh_runner.go:195] Run: crio --version
	I1002 07:47:56.971943  463695 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:47:56.974705  463695 cli_runner.go:164] Run: docker network inspect force-systemd-flag-275910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:47:56.991356  463695 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 07:47:56.995344  463695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:47:57.005867  463695 kubeadm.go:883] updating cluster {Name:force-systemd-flag-275910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-275910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:47:57.006017  463695 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:47:57.006085  463695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:47:57.040715  463695 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:47:57.040735  463695 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:47:57.040791  463695 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:47:57.074676  463695 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:47:57.074697  463695 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:47:57.074706  463695 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 07:47:57.074823  463695 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-275910 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-275910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:47:57.074908  463695 ssh_runner.go:195] Run: crio config
	I1002 07:47:57.138343  463695 cni.go:84] Creating CNI manager for ""
	I1002 07:47:57.138362  463695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:47:57.138379  463695 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:47:57.138402  463695 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-275910 NodeName:force-systemd-flag-275910 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:47:57.138518  463695 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-275910"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:47:57.138585  463695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:47:57.146473  463695 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:47:57.146570  463695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:47:57.154654  463695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1002 07:47:57.168124  463695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:47:57.182049  463695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
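Because this test forces the systemd cgroup driver, the kubeadm config copied above and the CRI-O config patched earlier have to agree on it. A minimal way to confirm that on the node (assuming a shell inside the kic container, e.g. via "docker exec -it force-systemd-flag-275910 bash" or "minikube ssh -p force-systemd-flag-275910"):
	# cgroup driver the kubelet will be handed (from the generated kubeadm config)
	grep cgroupDriver /var/tmp/minikube/kubeadm.yaml.new
	# cgroup manager CRI-O was patched to use earlier in this run
	sudo grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf
	# effective value as CRI-O itself reports it
	sudo crio config | grep cgroup_manager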
	I1002 07:47:57.195845  463695 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:47:57.199553  463695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:47:57.209578  463695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:47:57.322675  463695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:47:57.339576  463695 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910 for IP: 192.168.85.2
	I1002 07:47:57.339600  463695 certs.go:195] generating shared ca certs ...
	I1002 07:47:57.339616  463695 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:47:57.339759  463695 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:47:57.339807  463695 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:47:57.339818  463695 certs.go:257] generating profile certs ...
	I1002 07:47:57.339874  463695 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/client.key
	I1002 07:47:57.339891  463695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/client.crt with IP's: []
	I1002 07:47:59.700613  463695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/client.crt ...
	I1002 07:47:59.700646  463695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/client.crt: {Name:mk8b048c53252db58be9eafc756e8e3fbfa3255a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:47:59.700915  463695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/client.key ...
	I1002 07:47:59.700935  463695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/client.key: {Name:mk4f3b8de2054f74ba0dbc3690551b91b09cf694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:47:59.701088  463695 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/apiserver.key.c007a1a0
	I1002 07:47:59.701111  463695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/apiserver.crt.c007a1a0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 07:48:00.094027  463695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/apiserver.crt.c007a1a0 ...
	I1002 07:48:00.094069  463695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/apiserver.crt.c007a1a0: {Name:mk40d3989371234b5ac955582c69992d79bfbcbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:48:00.094272  463695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/apiserver.key.c007a1a0 ...
	I1002 07:48:00.094286  463695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/apiserver.key.c007a1a0: {Name:mk86eb953acd6832b03803f0f1aecff1469ea8cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:48:00.094362  463695 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/apiserver.crt.c007a1a0 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/apiserver.crt
	I1002 07:48:00.094451  463695 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/apiserver.key.c007a1a0 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/apiserver.key
	I1002 07:48:00.094524  463695 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/proxy-client.key
	I1002 07:48:00.094540  463695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/proxy-client.crt with IP's: []
	I1002 07:48:00.663990  463695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/proxy-client.crt ...
	I1002 07:48:00.664069  463695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/proxy-client.crt: {Name:mkf990147266097cb5ff91a562d04d88c08d200e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:48:00.664315  463695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/proxy-client.key ...
	I1002 07:48:00.664359  463695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/proxy-client.key: {Name:mk5b7b52f4b8a3ab564797c7e0c60732fad1c4c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:48:00.664507  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:48:00.664556  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:48:00.664588  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:48:00.664630  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:48:00.664668  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:48:00.664699  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:48:00.664744  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:48:00.664779  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:48:00.664864  463695 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:48:00.664932  463695 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:48:00.664972  463695 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:48:00.665026  463695 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:48:00.665088  463695 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:48:00.665138  463695 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:48:00.665236  463695 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:48:00.665294  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:48:00.665338  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:48:00.665372  463695 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:48:00.665957  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:48:00.690275  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:48:00.716253  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:48:00.739589  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:48:00.761845  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1002 07:48:00.783310  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 07:48:00.806689  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:48:00.827876  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-flag-275910/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:48:00.848292  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:48:00.867057  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:48:00.889345  463695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:48:00.909449  463695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:48:00.923772  463695 ssh_runner.go:195] Run: openssl version
	I1002 07:48:00.931737  463695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:48:00.941499  463695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:48:00.946451  463695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:48:00.946544  463695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:48:00.989911  463695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:48:00.998412  463695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:48:01.007927  463695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:48:01.013015  463695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:48:01.013115  463695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:48:01.060190  463695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:48:01.073870  463695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:48:01.082704  463695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:48:01.087130  463695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:48:01.087229  463695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:48:01.134846  463695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
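The *.0 symlink names used above are OpenSSL subject hashes of the installed CA certificates. A minimal reproduction by hand, using the minikubeCA values from this run:
	# prints the 8-hex-digit subject hash; for minikubeCA.pem in this run it is b5213941
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# the hash-named symlink OpenSSL expects in its certificate directory
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0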
	I1002 07:48:01.162965  463695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:48:01.175650  463695 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 07:48:01.175713  463695 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-275910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-275910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:48:01.175803  463695 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:48:01.175883  463695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:48:01.232325  463695 cri.go:89] found id: ""
	I1002 07:48:01.232427  463695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:48:01.244031  463695 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 07:48:01.256357  463695 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 07:48:01.256472  463695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:48:01.272845  463695 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 07:48:01.272917  463695 kubeadm.go:157] found existing configuration files:
	
	I1002 07:48:01.272990  463695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 07:48:01.281287  463695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 07:48:01.281363  463695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 07:48:01.289083  463695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 07:48:01.297892  463695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 07:48:01.298092  463695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:48:01.306318  463695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 07:48:01.314768  463695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 07:48:01.314851  463695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:48:01.323147  463695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 07:48:01.335210  463695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 07:48:01.335305  463695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:48:01.343958  463695 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 07:48:01.388645  463695 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 07:48:01.389112  463695 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 07:48:01.413974  463695 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 07:48:01.414118  463695 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 07:48:01.414189  463695 kubeadm.go:318] OS: Linux
	I1002 07:48:01.414262  463695 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 07:48:01.414345  463695 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 07:48:01.414425  463695 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 07:48:01.414505  463695 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 07:48:01.414577  463695 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 07:48:01.414657  463695 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 07:48:01.414739  463695 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 07:48:01.414820  463695 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 07:48:01.414890  463695 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 07:48:01.486656  463695 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 07:48:01.486826  463695 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 07:48:01.486963  463695 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 07:48:01.500302  463695 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 07:48:01.506018  463695 out.go:252]   - Generating certificates and keys ...
	I1002 07:48:01.506145  463695 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 07:48:01.506274  463695 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 07:48:02.691439  463695 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 07:48:03.015617  463695 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 07:48:03.500709  463695 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 07:48:03.715116  463695 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 07:48:04.403457  463695 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 07:48:04.403825  463695 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-275910 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 07:48:04.521505  463695 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 07:48:04.521911  463695 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-275910 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 07:48:04.748906  463695 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 07:48:05.185010  463695 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 07:48:05.404286  463695 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 07:48:05.404495  463695 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 07:48:05.824565  463695 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 07:48:06.381495  463695 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 07:48:06.909138  463695 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 07:48:07.060313  463695 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 07:48:07.608969  463695 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 07:48:07.610198  463695 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 07:48:07.613708  463695 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 07:48:07.617349  463695 out.go:252]   - Booting up control plane ...
	I1002 07:48:07.617490  463695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 07:48:07.617597  463695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 07:48:07.619038  463695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 07:48:07.636749  463695 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 07:48:07.636879  463695 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 07:48:07.644844  463695 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 07:48:07.645599  463695 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 07:48:07.645925  463695 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 07:48:07.779542  463695 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 07:48:07.779696  463695 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 07:48:08.779677  463695 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001753356s
	I1002 07:48:08.783388  463695 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 07:48:08.783496  463695 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 07:48:08.783945  463695 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 07:48:08.784037  463695 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:52:08.784474  463695 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000008706s
	I1002 07:52:08.785844  463695 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00046466s
	I1002 07:52:08.785944  463695 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000279945s
	I1002 07:52:08.785951  463695 kubeadm.go:318] 
	I1002 07:52:08.786046  463695 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:52:08.786132  463695 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:52:08.786237  463695 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:52:08.786336  463695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:52:08.786414  463695 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:52:08.786505  463695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:52:08.786512  463695 kubeadm.go:318] 
	I1002 07:52:08.789061  463695 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 07:52:08.789401  463695 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 07:52:08.789537  463695 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:52:08.790288  463695 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 07:52:08.790393  463695 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 07:52:08.790564  463695 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-275910 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-275910 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001753356s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000008706s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00046466s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000279945s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-275910 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-275910 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001753356s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000008706s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00046466s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000279945s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 07:52:08.790649  463695 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 07:52:09.322541  463695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:52:09.336491  463695 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 07:52:09.336560  463695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:52:09.344817  463695 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 07:52:09.344841  463695 kubeadm.go:157] found existing configuration files:
	
	I1002 07:52:09.344902  463695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 07:52:09.352743  463695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 07:52:09.352812  463695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 07:52:09.360741  463695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 07:52:09.369029  463695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 07:52:09.369101  463695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:52:09.376837  463695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 07:52:09.384770  463695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 07:52:09.384842  463695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:52:09.392659  463695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 07:52:09.401254  463695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 07:52:09.401322  463695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:52:09.409244  463695 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 07:52:09.475761  463695 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 07:52:09.476021  463695 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 07:52:09.547346  463695 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:56:13.694350  463695 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:56:13.694521  463695 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:56:13.699421  463695 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 07:56:13.699489  463695 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 07:56:13.699595  463695 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 07:56:13.699662  463695 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 07:56:13.699704  463695 kubeadm.go:318] OS: Linux
	I1002 07:56:13.699759  463695 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 07:56:13.699816  463695 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 07:56:13.699873  463695 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 07:56:13.699931  463695 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 07:56:13.699989  463695 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 07:56:13.700048  463695 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 07:56:13.700110  463695 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 07:56:13.700169  463695 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 07:56:13.700224  463695 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 07:56:13.700308  463695 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 07:56:13.700417  463695 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 07:56:13.700525  463695 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 07:56:13.700600  463695 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 07:56:13.704682  463695 out.go:252]   - Generating certificates and keys ...
	I1002 07:56:13.704797  463695 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 07:56:13.704877  463695 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 07:56:13.704970  463695 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 07:56:13.705065  463695 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 07:56:13.705180  463695 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 07:56:13.705255  463695 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 07:56:13.705332  463695 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 07:56:13.705418  463695 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 07:56:13.705505  463695 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 07:56:13.705604  463695 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 07:56:13.705664  463695 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 07:56:13.705733  463695 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 07:56:13.705792  463695 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 07:56:13.705862  463695 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 07:56:13.705952  463695 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 07:56:13.706024  463695 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 07:56:13.706086  463695 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 07:56:13.706187  463695 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 07:56:13.706263  463695 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 07:56:13.709121  463695 out.go:252]   - Booting up control plane ...
	I1002 07:56:13.709218  463695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 07:56:13.709309  463695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 07:56:13.709385  463695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 07:56:13.709499  463695 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 07:56:13.709600  463695 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 07:56:13.709713  463695 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 07:56:13.709805  463695 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 07:56:13.709849  463695 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 07:56:13.709990  463695 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 07:56:13.710103  463695 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 07:56:13.710168  463695 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00147554s
	I1002 07:56:13.710268  463695 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 07:56:13.710365  463695 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 07:56:13.710463  463695 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 07:56:13.710549  463695 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:56:13.710628  463695 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000251417s
	I1002 07:56:13.710716  463695 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000299065s
	I1002 07:56:13.710796  463695 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000766458s
	I1002 07:56:13.710804  463695 kubeadm.go:318] 
	I1002 07:56:13.710906  463695 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:56:13.710996  463695 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:56:13.711098  463695 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:56:13.711204  463695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:56:13.711286  463695 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:56:13.711376  463695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:56:13.711442  463695 kubeadm.go:402] duration metric: took 8m12.535733194s to StartCluster
	I1002 07:56:13.711493  463695 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:56:13.711566  463695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:56:13.711662  463695 kubeadm.go:318] 
	I1002 07:56:13.737176  463695 cri.go:89] found id: ""
	I1002 07:56:13.737212  463695 logs.go:282] 0 containers: []
	W1002 07:56:13.737222  463695 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:56:13.737229  463695 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:56:13.737289  463695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:56:13.767318  463695 cri.go:89] found id: ""
	I1002 07:56:13.767352  463695 logs.go:282] 0 containers: []
	W1002 07:56:13.767362  463695 logs.go:284] No container was found matching "etcd"
	I1002 07:56:13.767369  463695 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:56:13.767443  463695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:56:13.792397  463695 cri.go:89] found id: ""
	I1002 07:56:13.792423  463695 logs.go:282] 0 containers: []
	W1002 07:56:13.792432  463695 logs.go:284] No container was found matching "coredns"
	I1002 07:56:13.792439  463695 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:56:13.792502  463695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:56:13.822490  463695 cri.go:89] found id: ""
	I1002 07:56:13.822515  463695 logs.go:282] 0 containers: []
	W1002 07:56:13.822525  463695 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:56:13.822531  463695 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:56:13.822591  463695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:56:13.848792  463695 cri.go:89] found id: ""
	I1002 07:56:13.848824  463695 logs.go:282] 0 containers: []
	W1002 07:56:13.848833  463695 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:56:13.848840  463695 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:56:13.848902  463695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:56:13.875585  463695 cri.go:89] found id: ""
	I1002 07:56:13.875610  463695 logs.go:282] 0 containers: []
	W1002 07:56:13.875620  463695 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:56:13.875627  463695 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:56:13.875688  463695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:56:13.902744  463695 cri.go:89] found id: ""
	I1002 07:56:13.902781  463695 logs.go:282] 0 containers: []
	W1002 07:56:13.902791  463695 logs.go:284] No container was found matching "kindnet"
	I1002 07:56:13.902801  463695 logs.go:123] Gathering logs for kubelet ...
	I1002 07:56:13.902813  463695 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:56:13.995776  463695 logs.go:123] Gathering logs for dmesg ...
	I1002 07:56:13.995815  463695 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:56:14.016851  463695 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:56:14.016885  463695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:56:14.088197  463695 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:56:14.079636    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.080166    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.082121    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.082830    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.083971    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:56:14.079636    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.080166    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.082121    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.082830    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.083971    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:56:14.088222  463695 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:56:14.088237  463695 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:56:14.172472  463695 logs.go:123] Gathering logs for container status ...
	I1002 07:56:14.172513  463695 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:56:14.201690  463695 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00147554s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000251417s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000299065s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000766458s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:56:14.201752  463695 out.go:285] * 
	W1002 07:56:14.201842  463695 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00147554s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000251417s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000299065s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000766458s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:56:14.201863  463695 out.go:285] * 
	W1002 07:56:14.204168  463695 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:56:14.209929  463695 out.go:203] 
	W1002 07:56:14.212957  463695 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00147554s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000251417s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000299065s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000766458s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:56:14.212992  463695 out.go:285] * 
	I1002 07:56:14.216283  463695 out.go:203] 

                                                
                                                
** /stderr **
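The kubeadm failure captured above repeatedly points at the same triage path: list the control-plane containers with crictl and read their logs. A minimal sketch of that triage, reusing the profile name and CRI-O socket path that appear in this log (CONTAINERID is a placeholder to be copied from the ps output), might look like:

	# open a shell on the node created by this test run
	out/minikube-linux-arm64 -p force-systemd-flag-275910 ssh
	# list all Kubernetes containers, including exited ones, as suggested by kubeadm above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# read the logs of whichever container failed to stay up
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID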
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-275910 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-275910 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-10-02 07:56:14.565325788 +0000 UTC m=+4494.911605346
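docker_test.go:132 above dumps /etc/crio/crio.conf.d/02-crio.conf from the node; assuming that drop-in uses CRI-O's standard cgroup_manager key, a quick manual check for the systemd cgroup manager that --force-systemd is meant to enforce could be:

	out/minikube-linux-arm64 -p force-systemd-flag-275910 ssh "grep -i cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"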
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-flag-275910
helpers_test.go:243: (dbg) docker inspect force-systemd-flag-275910:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "10f8380b38498e6f0348d851d8d331094d9b4bf06129ca11c935790291c6fd3f",
	        "Created": "2025-10-02T07:47:50.112740873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 464257,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:47:50.207457781Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/10f8380b38498e6f0348d851d8d331094d9b4bf06129ca11c935790291c6fd3f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/10f8380b38498e6f0348d851d8d331094d9b4bf06129ca11c935790291c6fd3f/hostname",
	        "HostsPath": "/var/lib/docker/containers/10f8380b38498e6f0348d851d8d331094d9b4bf06129ca11c935790291c6fd3f/hosts",
	        "LogPath": "/var/lib/docker/containers/10f8380b38498e6f0348d851d8d331094d9b4bf06129ca11c935790291c6fd3f/10f8380b38498e6f0348d851d8d331094d9b4bf06129ca11c935790291c6fd3f-json.log",
	        "Name": "/force-systemd-flag-275910",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-275910:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-275910",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "10f8380b38498e6f0348d851d8d331094d9b4bf06129ca11c935790291c6fd3f",
	                "LowerDir": "/var/lib/docker/overlay2/cf8c15c43635b8f8600c7a835dec951efe5be3fc1fc29bb4bae6bb0c72db255f-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf8c15c43635b8f8600c7a835dec951efe5be3fc1fc29bb4bae6bb0c72db255f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf8c15c43635b8f8600c7a835dec951efe5be3fc1fc29bb4bae6bb0c72db255f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf8c15c43635b8f8600c7a835dec951efe5be3fc1fc29bb4bae6bb0c72db255f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-275910",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-275910/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-275910",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-275910",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-275910",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ced72028432a1269e7ff1d60042c1201d6ea6472879ffefb776e554d3461dd33",
	            "SandboxKey": "/var/run/docker/netns/ced72028432a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33378"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33379"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33382"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33380"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33381"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-275910": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:19:f0:9e:68:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cd37a6ebda70841f5cd707e286feaab0d96811b55ad457bf34c79ab9e0826866",
	                    "EndpointID": "448653a556fc2b5d925d7816a313bbdc142a8c60ea4e16828dcbb5779b8bfeeb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-275910",
	                        "10f8380b3849"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
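The inspect output above shows the kic container for this profile was created as expected: privileged, /usr/local/bin/entrypoint /sbin/init as PID 1, and ports 22/2376/5000/8443/32443 published on 127.0.0.1. As a minimal sketch (assuming the force-systemd-flag-275910 container still exists on the build host), the same fields can be pulled out with docker's --format templates instead of reading the full JSON dump:

	# published host ports for the profile container
	docker container inspect force-systemd-flag-275910 --format '{{json .NetworkSettings.Ports}}'
	# security options and entrypoint, for comparison with the dump above
	docker container inspect force-systemd-flag-275910 --format '{{join .HostConfig.SecurityOpt ", "}}'
	docker container inspect force-systemd-flag-275910 --format '{{.Config.Entrypoint}}'
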
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-275910 -n force-systemd-flag-275910
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-275910 -n force-systemd-flag-275910: exit status 6 (312.504836ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 07:56:14.886862  473958 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-275910" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig

                                                
                                                
** /stderr **
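The exit status 6 comes from the kubeconfig check: the host container is Running, but the profile's endpoint is missing from /home/jenkins/minikube-integration/21643-292504/kubeconfig. A minimal sketch of the manual check and the fix suggested by the warning above (commands assumed to be run from the integration workspace):

	# confirm the profile's context is absent from the kubeconfig the status check reads
	KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig kubectl config get-contexts
	# regenerate the context for the profile, as the stdout warning suggests
	out/minikube-linux-arm64 -p force-systemd-flag-275910 update-context
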
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-275910 logs -n 25
helpers_test.go:260: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-810803 sudo systemctl cat kubelet --no-pager                                                     │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl status docker --all --full --no-pager                                      │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl cat docker --no-pager                                                      │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /etc/docker/daemon.json                                                          │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo docker system info                                                                   │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cri-dockerd --version                                                                │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl cat containerd --no-pager                                                  │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /etc/containerd/config.toml                                                      │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo containerd config dump                                                               │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl status crio --all --full --no-pager                                        │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl cat crio --no-pager                                                        │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo crio config                                                                          │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ delete  │ -p cilium-810803                                                                                           │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │ 02 Oct 25 07:49 UTC │
	│ start   │ -p force-systemd-env-297062 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-297062  │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ force-systemd-flag-275910 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-275910 │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:49:39
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:49:39.523361  470112 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:49:39.523553  470112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:49:39.523581  470112 out.go:374] Setting ErrFile to fd 2...
	I1002 07:49:39.523601  470112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:49:39.524315  470112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:49:39.524811  470112 out.go:368] Setting JSON to false
	I1002 07:49:39.525688  470112 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9131,"bootTime":1759382249,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:49:39.525757  470112 start.go:140] virtualization:  
	I1002 07:49:39.529240  470112 out.go:179] * [force-systemd-env-297062] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:49:39.533057  470112 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:49:39.533180  470112 notify.go:220] Checking for updates...
	I1002 07:49:39.538878  470112 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:49:39.541847  470112 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:49:39.544867  470112 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:49:39.547724  470112 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:49:39.550628  470112 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1002 07:49:39.554067  470112 config.go:182] Loaded profile config "force-systemd-flag-275910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:49:39.554181  470112 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:49:39.586416  470112 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:49:39.586593  470112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:49:39.646129  470112 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:49:39.636902499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:49:39.646239  470112 docker.go:318] overlay module found
	I1002 07:49:39.649357  470112 out.go:179] * Using the docker driver based on user configuration
	I1002 07:49:39.652242  470112 start.go:304] selected driver: docker
	I1002 07:49:39.652261  470112 start.go:924] validating driver "docker" against <nil>
	I1002 07:49:39.652275  470112 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:49:39.653041  470112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:49:39.712792  470112 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:49:39.70329887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:49:39.712948  470112 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 07:49:39.713182  470112 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 07:49:39.716191  470112 out.go:179] * Using Docker driver with root privileges
	I1002 07:49:39.719064  470112 cni.go:84] Creating CNI manager for ""
	I1002 07:49:39.719196  470112 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:49:39.719212  470112 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 07:49:39.719299  470112 start.go:348] cluster config:
	{Name:force-systemd-env-297062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-297062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:49:39.722439  470112 out.go:179] * Starting "force-systemd-env-297062" primary control-plane node in "force-systemd-env-297062" cluster
	I1002 07:49:39.725285  470112 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:49:39.728234  470112 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:49:39.731018  470112 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:49:39.731106  470112 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:49:39.731113  470112 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:49:39.731121  470112 cache.go:58] Caching tarball of preloaded images
	I1002 07:49:39.731226  470112 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:49:39.731236  470112 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:49:39.731340  470112 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/config.json ...
	I1002 07:49:39.731365  470112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/config.json: {Name:mk246686f2a17d8558e63ddf32e6455f3f8b7ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:39.750041  470112 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:49:39.750065  470112 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:49:39.750092  470112 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:49:39.750115  470112 start.go:360] acquireMachinesLock for force-systemd-env-297062: {Name:mka6346f4f34ee7d4de2b8343e2733b1f08800ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:49:39.750220  470112 start.go:364] duration metric: took 85.564µs to acquireMachinesLock for "force-systemd-env-297062"
	I1002 07:49:39.750261  470112 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-297062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-297062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:49:39.750326  470112 start.go:125] createHost starting for "" (driver="docker")
	I1002 07:49:39.753760  470112 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 07:49:39.754004  470112 start.go:159] libmachine.API.Create for "force-systemd-env-297062" (driver="docker")
	I1002 07:49:39.754054  470112 client.go:168] LocalClient.Create starting
	I1002 07:49:39.754126  470112 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem
	I1002 07:49:39.754165  470112 main.go:141] libmachine: Decoding PEM data...
	I1002 07:49:39.754186  470112 main.go:141] libmachine: Parsing certificate...
	I1002 07:49:39.754251  470112 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem
	I1002 07:49:39.754280  470112 main.go:141] libmachine: Decoding PEM data...
	I1002 07:49:39.754293  470112 main.go:141] libmachine: Parsing certificate...
	I1002 07:49:39.754700  470112 cli_runner.go:164] Run: docker network inspect force-systemd-env-297062 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 07:49:39.771015  470112 cli_runner.go:211] docker network inspect force-systemd-env-297062 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 07:49:39.771132  470112 network_create.go:284] running [docker network inspect force-systemd-env-297062] to gather additional debugging logs...
	I1002 07:49:39.771154  470112 cli_runner.go:164] Run: docker network inspect force-systemd-env-297062
	W1002 07:49:39.788401  470112 cli_runner.go:211] docker network inspect force-systemd-env-297062 returned with exit code 1
	I1002 07:49:39.788435  470112 network_create.go:287] error running [docker network inspect force-systemd-env-297062]: docker network inspect force-systemd-env-297062: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-297062 not found
	I1002 07:49:39.788449  470112 network_create.go:289] output of [docker network inspect force-systemd-env-297062]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-297062 not found
	
	** /stderr **
	I1002 07:49:39.788564  470112 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:49:39.805904  470112 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-87a294cab4b5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:50:ad:a1:2a:88} reservation:<nil>}
	I1002 07:49:39.806289  470112 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-560172b9232e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:9f:ec:fb:3f:87} reservation:<nil>}
	I1002 07:49:39.806457  470112 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2eae6334e56d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:6a:a0:79:3a:d9} reservation:<nil>}
	I1002 07:49:39.806938  470112 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ccdb0}
	I1002 07:49:39.806964  470112 network_create.go:124] attempt to create docker network force-systemd-env-297062 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1002 07:49:39.807025  470112 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-297062 force-systemd-env-297062
	I1002 07:49:39.875512  470112 network_create.go:108] docker network force-systemd-env-297062 192.168.76.0/24 created
	I1002 07:49:39.875548  470112 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-297062" container
	I1002 07:49:39.875645  470112 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 07:49:39.892431  470112 cli_runner.go:164] Run: docker volume create force-systemd-env-297062 --label name.minikube.sigs.k8s.io=force-systemd-env-297062 --label created_by.minikube.sigs.k8s.io=true
	I1002 07:49:39.909585  470112 oci.go:103] Successfully created a docker volume force-systemd-env-297062
	I1002 07:49:39.909689  470112 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-297062-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-297062 --entrypoint /usr/bin/test -v force-systemd-env-297062:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 07:49:40.494652  470112 oci.go:107] Successfully prepared a docker volume force-systemd-env-297062
	I1002 07:49:40.494706  470112 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:49:40.494726  470112 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 07:49:40.494814  470112 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-297062:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 07:49:44.938652  470112 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-297062:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.443780464s)
	I1002 07:49:44.938685  470112 kic.go:203] duration metric: took 4.443955277s to extract preloaded images to volume ...
	W1002 07:49:44.938830  470112 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 07:49:44.938933  470112 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 07:49:44.996025  470112 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-297062 --name force-systemd-env-297062 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-297062 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-297062 --network force-systemd-env-297062 --ip 192.168.76.2 --volume force-systemd-env-297062:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 07:49:45.529031  470112 cli_runner.go:164] Run: docker container inspect force-systemd-env-297062 --format={{.State.Running}}
	I1002 07:49:45.556780  470112 cli_runner.go:164] Run: docker container inspect force-systemd-env-297062 --format={{.State.Status}}
	I1002 07:49:45.584165  470112 cli_runner.go:164] Run: docker exec force-systemd-env-297062 stat /var/lib/dpkg/alternatives/iptables
	I1002 07:49:45.636911  470112 oci.go:144] the created container "force-systemd-env-297062" has a running status.
	I1002 07:49:45.636946  470112 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa...
	I1002 07:49:47.077898  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 07:49:47.077949  470112 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 07:49:47.097082  470112 cli_runner.go:164] Run: docker container inspect force-systemd-env-297062 --format={{.State.Status}}
	I1002 07:49:47.122236  470112 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 07:49:47.122290  470112 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-297062 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 07:49:47.165064  470112 cli_runner.go:164] Run: docker container inspect force-systemd-env-297062 --format={{.State.Status}}
	I1002 07:49:47.181545  470112 machine.go:93] provisionDockerMachine start ...
	I1002 07:49:47.181659  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:47.198951  470112 main.go:141] libmachine: Using SSH client type: native
	I1002 07:49:47.199324  470112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1002 07:49:47.199342  470112 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:49:47.330469  470112 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-297062
	
	I1002 07:49:47.330491  470112 ubuntu.go:182] provisioning hostname "force-systemd-env-297062"
	I1002 07:49:47.330563  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:47.353327  470112 main.go:141] libmachine: Using SSH client type: native
	I1002 07:49:47.353635  470112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1002 07:49:47.353655  470112 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-297062 && echo "force-systemd-env-297062" | sudo tee /etc/hostname
	I1002 07:49:47.493774  470112 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-297062
	
	I1002 07:49:47.493907  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:47.514172  470112 main.go:141] libmachine: Using SSH client type: native
	I1002 07:49:47.514495  470112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1002 07:49:47.514521  470112 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-297062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-297062/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-297062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:49:47.647607  470112 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:49:47.647639  470112 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:49:47.647661  470112 ubuntu.go:190] setting up certificates
	I1002 07:49:47.647670  470112 provision.go:84] configureAuth start
	I1002 07:49:47.647739  470112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-297062
	I1002 07:49:47.665650  470112 provision.go:143] copyHostCerts
	I1002 07:49:47.665691  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:49:47.665735  470112 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:49:47.665745  470112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:49:47.665825  470112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:49:47.665908  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:49:47.665924  470112 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:49:47.665928  470112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:49:47.665954  470112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:49:47.666002  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:49:47.666018  470112 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:49:47.666022  470112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:49:47.666046  470112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:49:47.666099  470112 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-297062 san=[127.0.0.1 192.168.76.2 force-systemd-env-297062 localhost minikube]
	I1002 07:49:48.641042  470112 provision.go:177] copyRemoteCerts
	I1002 07:49:48.641123  470112 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:49:48.641170  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:48.658519  470112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa Username:docker}
	I1002 07:49:48.758845  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:49:48.758907  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:49:48.776327  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:49:48.776436  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1002 07:49:48.794204  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:49:48.794268  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:49:48.811763  470112 provision.go:87] duration metric: took 1.164062127s to configureAuth
	I1002 07:49:48.811794  470112 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:49:48.811980  470112 config.go:182] Loaded profile config "force-systemd-env-297062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:49:48.812124  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:48.829004  470112 main.go:141] libmachine: Using SSH client type: native
	I1002 07:49:48.829314  470112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1002 07:49:48.829329  470112 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:49:49.070460  470112 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:49:49.070487  470112 machine.go:96] duration metric: took 1.888920251s to provisionDockerMachine
	I1002 07:49:49.070499  470112 client.go:171] duration metric: took 9.316433395s to LocalClient.Create
	I1002 07:49:49.070524  470112 start.go:167] duration metric: took 9.31652064s to libmachine.API.Create "force-systemd-env-297062"
	I1002 07:49:49.070538  470112 start.go:293] postStartSetup for "force-systemd-env-297062" (driver="docker")
	I1002 07:49:49.070554  470112 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:49:49.070657  470112 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:49:49.070705  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:49.089184  470112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa Username:docker}
	I1002 07:49:49.187289  470112 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:49:49.190658  470112 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:49:49.190691  470112 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:49:49.190703  470112 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:49:49.190763  470112 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:49:49.190858  470112 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:49:49.190870  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:49:49.190976  470112 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:49:49.198678  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:49:49.216672  470112 start.go:296] duration metric: took 146.111882ms for postStartSetup
	I1002 07:49:49.217112  470112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-297062
	I1002 07:49:49.234066  470112 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/config.json ...
	I1002 07:49:49.234390  470112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:49:49.234445  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:49.251200  470112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa Username:docker}
	I1002 07:49:49.344464  470112 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:49:49.349611  470112 start.go:128] duration metric: took 9.599269105s to createHost
	I1002 07:49:49.349633  470112 start.go:83] releasing machines lock for "force-systemd-env-297062", held for 9.59939929s
	I1002 07:49:49.349707  470112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-297062
	I1002 07:49:49.366678  470112 ssh_runner.go:195] Run: cat /version.json
	I1002 07:49:49.366736  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:49.366987  470112 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:49:49.367051  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:49.389294  470112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa Username:docker}
	I1002 07:49:49.395904  470112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa Username:docker}
	I1002 07:49:49.578251  470112 ssh_runner.go:195] Run: systemctl --version
	I1002 07:49:49.584735  470112 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:49:49.621278  470112 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:49:49.625616  470112 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:49:49.625728  470112 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:49:49.653532  470112 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 07:49:49.653560  470112 start.go:495] detecting cgroup driver to use...
	I1002 07:49:49.653578  470112 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1002 07:49:49.653634  470112 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:49:49.670631  470112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:49:49.684338  470112 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:49:49.684409  470112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:49:49.703453  470112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:49:49.722726  470112 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:49:49.832345  470112 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:49:49.968034  470112 docker.go:234] disabling docker service ...
	I1002 07:49:49.968111  470112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:49:49.991849  470112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:49:50.014003  470112 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:49:50.145119  470112 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:49:50.268445  470112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:49:50.283394  470112 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:49:50.297837  470112 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:49:50.297953  470112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:49:50.308170  470112 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 07:49:50.308260  470112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:49:50.317348  470112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:49:50.326770  470112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:49:50.336233  470112 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:49:50.344834  470112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:49:50.353726  470112 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:49:50.367565  470112 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:49:50.376576  470112 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:49:50.384738  470112 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:49:50.392507  470112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:49:50.496603  470112 ssh_runner.go:195] Run: sudo systemctl restart crio
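The run above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup, default_sysctls) and then restarts CRI-O. A minimal sketch for spot-checking those settings on the node, assuming the same file path; only the key names and values come from the commands above:

# Sketch: confirm the drop-in written by the sed edits above.
sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
  /etc/crio/crio.conf.d/02-crio.conf
# Expected after the edits above:
#   pause_image = "registry.k8s.io/pause:3.10.1"
#   cgroup_manager = "systemd"
#   conmon_cgroup = "pod"
#   "net.ipv4.ip_unprivileged_port_start=0"   (inside default_sysctls)
sudo crio config 2>/dev/null | grep -E 'cgroup_manager|conmon_cgroup'   # what CRI-O actually loads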
	I1002 07:49:50.628622  470112 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:49:50.628739  470112 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:49:50.633395  470112 start.go:563] Will wait 60s for crictl version
	I1002 07:49:50.633517  470112 ssh_runner.go:195] Run: which crictl
	I1002 07:49:50.637606  470112 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:49:50.681431  470112 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:49:50.681579  470112 ssh_runner.go:195] Run: crio --version
	I1002 07:49:50.714375  470112 ssh_runner.go:195] Run: crio --version
	I1002 07:49:50.748641  470112 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:49:50.751562  470112 cli_runner.go:164] Run: docker network inspect force-systemd-env-297062 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:49:50.767806  470112 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 07:49:50.771732  470112 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:49:50.781704  470112 kubeadm.go:883] updating cluster {Name:force-systemd-env-297062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-297062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:49:50.781817  470112 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:49:50.781881  470112 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:49:50.815306  470112 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:49:50.815333  470112 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:49:50.815391  470112 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:49:50.840607  470112 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:49:50.840633  470112 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:49:50.840641  470112 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 07:49:50.840727  470112 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-297062 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-297062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:49:50.840818  470112 ssh_runner.go:195] Run: crio config
	I1002 07:49:50.904999  470112 cni.go:84] Creating CNI manager for ""
	I1002 07:49:50.905036  470112 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:49:50.905058  470112 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:49:50.905081  470112 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-297062 NodeName:force-systemd-env-297062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:49:50.905213  470112 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-297062"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:49:50.905298  470112 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:49:50.913374  470112 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:49:50.913497  470112 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:49:50.921404  470112 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1002 07:49:50.935032  470112 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:49:50.949244  470112 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1002 07:49:50.962595  470112 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:49:50.966213  470112 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:49:50.976654  470112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:49:51.091723  470112 ssh_runner.go:195] Run: sudo systemctl start kubelet
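At this point the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit, and kubeadm.yaml.new have been written and the kubelet has been started ahead of kubeadm init. A small sketch for inspecting that state on the node; these are generic systemd/file checks, not commands taken from this run:

# Show the kubelet unit plus the 10-kubeadm.conf drop-in written above
systemctl cat kubelet --no-pager
# The kubelet may restart until 'kubeadm init' later writes /var/lib/kubelet/config.yaml
systemctl is-active kubelet || sudo journalctl -u kubelet --no-pager | tail -n 20
# The generated kubeadm config staged for the init run
sudo head -n 20 /var/tmp/minikube/kubeadm.yaml.new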
	I1002 07:49:51.108613  470112 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062 for IP: 192.168.76.2
	I1002 07:49:51.108636  470112 certs.go:195] generating shared ca certs ...
	I1002 07:49:51.108654  470112 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:51.108808  470112 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:49:51.108864  470112 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:49:51.108877  470112 certs.go:257] generating profile certs ...
	I1002 07:49:51.108936  470112 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/client.key
	I1002 07:49:51.108964  470112 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/client.crt with IP's: []
	I1002 07:49:52.134890  470112 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/client.crt ...
	I1002 07:49:52.134925  470112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/client.crt: {Name:mk29d9ba4e76056105d441d180e740bc509adc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:52.135134  470112 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/client.key ...
	I1002 07:49:52.135152  470112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/client.key: {Name:mk4df53639a56d3aac7cc9ac26d47f2cbe1ff198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:52.135243  470112 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.key.0baec4d2
	I1002 07:49:52.135267  470112 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.crt.0baec4d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 07:49:52.421027  470112 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.crt.0baec4d2 ...
	I1002 07:49:52.421060  470112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.crt.0baec4d2: {Name:mk38570c71bb432c36b04e2f58d5b43494ac89bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:52.421249  470112 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.key.0baec4d2 ...
	I1002 07:49:52.421265  470112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.key.0baec4d2: {Name:mk9686d7d4802491ec42de5ac6c50b5bba9ebd4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:52.421349  470112 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.crt.0baec4d2 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.crt
	I1002 07:49:52.421439  470112 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.key.0baec4d2 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.key
	I1002 07:49:52.421513  470112 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.key
	I1002 07:49:52.421538  470112 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.crt with IP's: []
	I1002 07:49:52.786161  470112 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.crt ...
	I1002 07:49:52.786195  470112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.crt: {Name:mk5a4dd77b6270a83f8831a7a691fa6b2fd1eb40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:52.786381  470112 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.key ...
	I1002 07:49:52.786397  470112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.key: {Name:mk4e9848f80c8c4449b73097b339b92cda5f774a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:52.786485  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:49:52.786509  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:49:52.786524  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:49:52.786543  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:49:52.786556  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:49:52.786575  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:49:52.786595  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:49:52.786607  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:49:52.786673  470112 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:49:52.786710  470112 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:49:52.786724  470112 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:49:52.786752  470112 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:49:52.786780  470112 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:49:52.786812  470112 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:49:52.786857  470112 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:49:52.786889  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:49:52.786906  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:49:52.786922  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:49:52.787489  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:49:52.805783  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:49:52.824813  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:49:52.843143  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:49:52.861776  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1002 07:49:52.879951  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:49:52.898361  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:49:52.917086  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:49:52.935565  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:49:52.954277  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:49:52.972402  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:49:52.990327  470112 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:49:53.015169  470112 ssh_runner.go:195] Run: openssl version
	I1002 07:49:53.022292  470112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:49:53.031424  470112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:49:53.035492  470112 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:49:53.035609  470112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:49:53.077036  470112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:49:53.085631  470112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:49:53.094333  470112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:49:53.099481  470112 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:49:53.099554  470112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:49:53.140692  470112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:49:53.149125  470112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:49:53.157352  470112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:49:53.161106  470112 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:49:53.161172  470112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:49:53.202141  470112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
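The openssl/ln sequence above installs each CA under /etc/ssl/certs twice: once by name and once under its subject hash plus a ".0" suffix (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL locates CAs at verification time. A condensed sketch of the same derivation, reusing the paths and hashes from the log:

# Sketch: recompute the hash-named symlinks created above.
for pem in 2943572.pem minikubeCA.pem 294357.pem; do
  hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/${pem}")   # 3ec20f2e / b5213941 / 51391683
  sudo ln -fs "/etc/ssl/certs/${pem}" "/etc/ssl/certs/${hash}.0"
done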
	I1002 07:49:53.210624  470112 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:49:53.214645  470112 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 07:49:53.214703  470112 kubeadm.go:400] StartCluster: {Name:force-systemd-env-297062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-297062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:49:53.214778  470112 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:49:53.214843  470112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:49:53.242500  470112 cri.go:89] found id: ""
	I1002 07:49:53.242579  470112 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:49:53.250640  470112 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 07:49:53.258658  470112 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 07:49:53.258753  470112 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:49:53.266758  470112 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 07:49:53.266827  470112 kubeadm.go:157] found existing configuration files:
	
	I1002 07:49:53.266891  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 07:49:53.275033  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 07:49:53.275135  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 07:49:53.282720  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 07:49:53.290814  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 07:49:53.290880  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:49:53.298578  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 07:49:53.307130  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 07:49:53.307198  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:49:53.314840  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 07:49:53.322444  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 07:49:53.322520  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:49:53.329847  470112 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 07:49:53.372619  470112 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 07:49:53.373070  470112 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 07:49:53.403280  470112 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 07:49:53.403358  470112 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 07:49:53.403399  470112 kubeadm.go:318] OS: Linux
	I1002 07:49:53.403464  470112 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 07:49:53.403517  470112 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 07:49:53.403568  470112 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 07:49:53.403618  470112 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 07:49:53.403681  470112 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 07:49:53.403734  470112 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 07:49:53.403782  470112 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 07:49:53.403834  470112 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 07:49:53.403883  470112 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 07:49:53.485883  470112 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 07:49:53.486010  470112 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 07:49:53.486112  470112 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 07:49:53.495848  470112 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 07:49:53.502218  470112 out.go:252]   - Generating certificates and keys ...
	I1002 07:49:53.502335  470112 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 07:49:53.502447  470112 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 07:49:54.411234  470112 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 07:49:54.592516  470112 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 07:49:55.229707  470112 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 07:49:56.010053  470112 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 07:49:56.764272  470112 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 07:49:56.764798  470112 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-297062 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 07:49:56.973119  470112 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 07:49:56.973286  470112 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-297062 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 07:49:57.702250  470112 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 07:49:58.164873  470112 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 07:50:01.202730  470112 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 07:50:01.203168  470112 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 07:50:01.700314  470112 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 07:50:02.042814  470112 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 07:50:02.321878  470112 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 07:50:02.469262  470112 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 07:50:02.859374  470112 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 07:50:02.860037  470112 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 07:50:02.862766  470112 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 07:50:02.867276  470112 out.go:252]   - Booting up control plane ...
	I1002 07:50:02.867398  470112 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 07:50:02.867493  470112 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 07:50:02.867570  470112 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 07:50:02.883942  470112 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 07:50:02.884059  470112 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 07:50:02.891936  470112 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 07:50:02.892441  470112 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 07:50:02.892700  470112 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 07:50:03.030988  470112 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 07:50:03.031143  470112 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 07:50:04.032676  470112 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001816124s
	I1002 07:50:04.036642  470112 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 07:50:04.036743  470112 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 07:50:04.036843  470112 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 07:50:04.036933  470112 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:52:08.784474  463695 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000008706s
	I1002 07:52:08.785844  463695 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00046466s
	I1002 07:52:08.785944  463695 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000279945s
	I1002 07:52:08.785951  463695 kubeadm.go:318] 
	I1002 07:52:08.786046  463695 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:52:08.786132  463695 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:52:08.786237  463695 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:52:08.786336  463695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:52:08.786414  463695 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:52:08.786505  463695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:52:08.786512  463695 kubeadm.go:318] 
	I1002 07:52:08.789061  463695 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 07:52:08.789401  463695 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 07:52:08.789537  463695 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:52:08.790288  463695 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 07:52:08.790393  463695 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 07:52:08.790564  463695 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-275910 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-275910 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001753356s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000008706s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00046466s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000279945s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
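	All three control-plane health checks timed out, and kubeadm's hint above points at the container runtime. A minimal sketch of the follow-up it suggests, plus the kubelet/CRI-O journals; the crictl commands are copied from the hint, the journalctl ones are generic additions:

	# List the control-plane containers CRI-O actually started (from kubeadm's hint above)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect a failing container's logs (substitute an ID from the previous command)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# Kubelet and CRI-O journals usually show why the static pods never became healthy
	sudo journalctl -u kubelet --no-pager | tail -n 100
	sudo journalctl -u crio --no-pager | tail -n 100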
	
	I1002 07:52:08.790649  463695 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 07:52:09.322541  463695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:52:09.336491  463695 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 07:52:09.336560  463695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:52:09.344817  463695 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 07:52:09.344841  463695 kubeadm.go:157] found existing configuration files:
	
	I1002 07:52:09.344902  463695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 07:52:09.352743  463695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 07:52:09.352812  463695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 07:52:09.360741  463695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 07:52:09.369029  463695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 07:52:09.369101  463695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:52:09.376837  463695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 07:52:09.384770  463695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 07:52:09.384842  463695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:52:09.392659  463695 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 07:52:09.401254  463695 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 07:52:09.401322  463695 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:52:09.409244  463695 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 07:52:09.475761  463695 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 07:52:09.476021  463695 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 07:52:09.547346  463695 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:54:04.037698  470112 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000310703s
	I1002 07:54:04.037894  470112 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000925822s
	I1002 07:54:04.038283  470112 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00170073s
	I1002 07:54:04.038309  470112 kubeadm.go:318] 
	I1002 07:54:04.038408  470112 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:54:04.038518  470112 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:54:04.038619  470112 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:54:04.038722  470112 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:54:04.038803  470112 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:54:04.038889  470112 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:54:04.038898  470112 kubeadm.go:318] 
	I1002 07:54:04.043779  470112 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 07:54:04.044029  470112 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 07:54:04.044149  470112 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:54:04.044737  470112 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:54:04.044814  470112 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 07:54:04.044953  470112 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-297062 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-297062 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001816124s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000310703s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000925822s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00170073s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 07:54:04.045055  470112 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 07:54:04.601819  470112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:54:04.615822  470112 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 07:54:04.615891  470112 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:54:04.624105  470112 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 07:54:04.624126  470112 kubeadm.go:157] found existing configuration files:
	
	I1002 07:54:04.624181  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 07:54:04.632232  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 07:54:04.632332  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 07:54:04.640516  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 07:54:04.648803  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 07:54:04.648871  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:54:04.656919  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 07:54:04.665159  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 07:54:04.665224  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:54:04.672807  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 07:54:04.680749  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 07:54:04.680812  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:54:04.688578  470112 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 07:54:04.729067  470112 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 07:54:04.729348  470112 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 07:54:04.752417  470112 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 07:54:04.752487  470112 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 07:54:04.752523  470112 kubeadm.go:318] OS: Linux
	I1002 07:54:04.752569  470112 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 07:54:04.752617  470112 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 07:54:04.752665  470112 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 07:54:04.752721  470112 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 07:54:04.752770  470112 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 07:54:04.752818  470112 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 07:54:04.752863  470112 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 07:54:04.752911  470112 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 07:54:04.752957  470112 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 07:54:04.824344  470112 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 07:54:04.824513  470112 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 07:54:04.824628  470112 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 07:54:04.839497  470112 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 07:54:04.844572  470112 out.go:252]   - Generating certificates and keys ...
	I1002 07:54:04.844750  470112 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 07:54:04.844873  470112 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 07:54:04.845003  470112 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 07:54:04.845104  470112 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 07:54:04.845215  470112 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 07:54:04.845316  470112 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 07:54:04.845418  470112 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 07:54:04.845518  470112 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 07:54:04.845636  470112 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 07:54:04.845750  470112 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 07:54:04.845815  470112 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 07:54:04.845910  470112 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 07:54:04.947844  470112 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 07:54:05.333970  470112 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 07:54:05.540423  470112 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 07:54:06.436511  470112 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 07:54:07.220464  470112 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 07:54:07.221100  470112 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 07:54:07.223758  470112 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 07:54:07.227275  470112 out.go:252]   - Booting up control plane ...
	I1002 07:54:07.227373  470112 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 07:54:07.227451  470112 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 07:54:07.227518  470112 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 07:54:07.242215  470112 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 07:54:07.242535  470112 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 07:54:07.251850  470112 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 07:54:07.251962  470112 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 07:54:07.252549  470112 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 07:54:07.399599  470112 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 07:54:07.399734  470112 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 07:54:08.398659  470112 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00400494s
	I1002 07:54:08.403317  470112 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 07:54:08.403681  470112 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 07:54:08.404503  470112 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 07:54:08.404816  470112 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:56:13.694350  463695 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:56:13.694521  463695 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:56:13.699421  463695 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 07:56:13.699489  463695 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 07:56:13.699595  463695 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 07:56:13.699662  463695 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 07:56:13.699704  463695 kubeadm.go:318] OS: Linux
	I1002 07:56:13.699759  463695 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 07:56:13.699816  463695 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 07:56:13.699873  463695 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 07:56:13.699931  463695 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 07:56:13.699989  463695 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 07:56:13.700048  463695 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 07:56:13.700110  463695 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 07:56:13.700169  463695 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 07:56:13.700224  463695 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 07:56:13.700308  463695 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 07:56:13.700417  463695 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 07:56:13.700525  463695 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 07:56:13.700600  463695 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 07:56:13.704682  463695 out.go:252]   - Generating certificates and keys ...
	I1002 07:56:13.704797  463695 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 07:56:13.704877  463695 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 07:56:13.704970  463695 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 07:56:13.705065  463695 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 07:56:13.705180  463695 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 07:56:13.705255  463695 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 07:56:13.705332  463695 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 07:56:13.705418  463695 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 07:56:13.705505  463695 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 07:56:13.705604  463695 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 07:56:13.705664  463695 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 07:56:13.705733  463695 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 07:56:13.705792  463695 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 07:56:13.705862  463695 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 07:56:13.705952  463695 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 07:56:13.706024  463695 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 07:56:13.706086  463695 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 07:56:13.706187  463695 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 07:56:13.706263  463695 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 07:56:13.709121  463695 out.go:252]   - Booting up control plane ...
	I1002 07:56:13.709218  463695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 07:56:13.709309  463695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 07:56:13.709385  463695 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 07:56:13.709499  463695 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 07:56:13.709600  463695 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 07:56:13.709713  463695 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 07:56:13.709805  463695 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 07:56:13.709849  463695 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 07:56:13.709990  463695 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 07:56:13.710103  463695 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 07:56:13.710168  463695 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00147554s
	I1002 07:56:13.710268  463695 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 07:56:13.710365  463695 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 07:56:13.710463  463695 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 07:56:13.710549  463695 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:56:13.710628  463695 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000251417s
	I1002 07:56:13.710716  463695 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000299065s
	I1002 07:56:13.710796  463695 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000766458s
	I1002 07:56:13.710804  463695 kubeadm.go:318] 
	I1002 07:56:13.710906  463695 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:56:13.710996  463695 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:56:13.711098  463695 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:56:13.711204  463695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:56:13.711286  463695 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:56:13.711376  463695 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:56:13.711442  463695 kubeadm.go:402] duration metric: took 8m12.535733194s to StartCluster
	I1002 07:56:13.711493  463695 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:56:13.711566  463695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:56:13.711662  463695 kubeadm.go:318] 
	I1002 07:56:13.737176  463695 cri.go:89] found id: ""
	I1002 07:56:13.737212  463695 logs.go:282] 0 containers: []
	W1002 07:56:13.737222  463695 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:56:13.737229  463695 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:56:13.737289  463695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:56:13.767318  463695 cri.go:89] found id: ""
	I1002 07:56:13.767352  463695 logs.go:282] 0 containers: []
	W1002 07:56:13.767362  463695 logs.go:284] No container was found matching "etcd"
	I1002 07:56:13.767369  463695 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:56:13.767443  463695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:56:13.792397  463695 cri.go:89] found id: ""
	I1002 07:56:13.792423  463695 logs.go:282] 0 containers: []
	W1002 07:56:13.792432  463695 logs.go:284] No container was found matching "coredns"
	I1002 07:56:13.792439  463695 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:56:13.792502  463695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:56:13.822490  463695 cri.go:89] found id: ""
	I1002 07:56:13.822515  463695 logs.go:282] 0 containers: []
	W1002 07:56:13.822525  463695 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:56:13.822531  463695 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:56:13.822591  463695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:56:13.848792  463695 cri.go:89] found id: ""
	I1002 07:56:13.848824  463695 logs.go:282] 0 containers: []
	W1002 07:56:13.848833  463695 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:56:13.848840  463695 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:56:13.848902  463695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:56:13.875585  463695 cri.go:89] found id: ""
	I1002 07:56:13.875610  463695 logs.go:282] 0 containers: []
	W1002 07:56:13.875620  463695 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:56:13.875627  463695 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:56:13.875688  463695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:56:13.902744  463695 cri.go:89] found id: ""
	I1002 07:56:13.902781  463695 logs.go:282] 0 containers: []
	W1002 07:56:13.902791  463695 logs.go:284] No container was found matching "kindnet"
	I1002 07:56:13.902801  463695 logs.go:123] Gathering logs for kubelet ...
	I1002 07:56:13.902813  463695 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:56:13.995776  463695 logs.go:123] Gathering logs for dmesg ...
	I1002 07:56:13.995815  463695 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:56:14.016851  463695 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:56:14.016885  463695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:56:14.088197  463695 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:56:14.079636    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.080166    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.082121    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.082830    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.083971    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:56:14.079636    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.080166    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.082121    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.082830    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:14.083971    2359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:56:14.088222  463695 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:56:14.088237  463695 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:56:14.172472  463695 logs.go:123] Gathering logs for container status ...
	I1002 07:56:14.172513  463695 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:56:14.201690  463695 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00147554s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000251417s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000299065s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000766458s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:56:14.201752  463695 out.go:285] * 
	W1002 07:56:14.201842  463695 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00147554s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000251417s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000299065s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000766458s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:56:14.201863  463695 out.go:285] * 
	W1002 07:56:14.204168  463695 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:56:14.209929  463695 out.go:203] 
	W1002 07:56:14.212957  463695 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00147554s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000251417s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000299065s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000766458s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:56:14.212992  463695 out.go:285] * 
	I1002 07:56:14.216283  463695 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:56:03 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:03.498774229Z" level=info msg="createCtr: removing container 6168372a15a2ca34d0d69adfebff413bfe1dff249f6f55e80ba78636a371f726" id=a56bcf71-876c-42b1-ae99-eace94edfb80 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:56:03 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:03.498809618Z" level=info msg="createCtr: deleting container 6168372a15a2ca34d0d69adfebff413bfe1dff249f6f55e80ba78636a371f726 from storage" id=a56bcf71-876c-42b1-ae99-eace94edfb80 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:56:03 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:03.504460746Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-flag-275910_kube-system_2e84221da472c50fcaf5536e39a687fb_0" id=a56bcf71-876c-42b1-ae99-eace94edfb80 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:56:10 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:10.464406076Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=024511d8-cda8-40b7-8f59-f31ed8899f6c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:56:10 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:10.465349601Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=7d993d3b-0023-48cb-aaca-bad47c89a6f4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:56:10 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:10.466310807Z" level=info msg="Creating container: kube-system/kube-controller-manager-force-systemd-flag-275910/kube-controller-manager" id=f84d17a8-802a-45e4-9c32-126700e83c23 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:56:10 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:10.466643701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:56:10 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:10.471171299Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:56:10 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:10.471814143Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:56:10 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:10.483442573Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f84d17a8-802a-45e4-9c32-126700e83c23 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:56:10 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:10.48457622Z" level=info msg="createCtr: deleting container ID df47e347377ba0be4cbb9d77319c9090473328504386c770680d1589abca5f41 from idIndex" id=f84d17a8-802a-45e4-9c32-126700e83c23 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:56:10 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:10.484618075Z" level=info msg="createCtr: removing container df47e347377ba0be4cbb9d77319c9090473328504386c770680d1589abca5f41" id=f84d17a8-802a-45e4-9c32-126700e83c23 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:56:10 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:10.484651626Z" level=info msg="createCtr: deleting container df47e347377ba0be4cbb9d77319c9090473328504386c770680d1589abca5f41 from storage" id=f84d17a8-802a-45e4-9c32-126700e83c23 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:56:10 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:10.487350227Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-flag-275910_kube-system_ef0e32a9ca4b645721b5026dc9365c32_0" id=f84d17a8-802a-45e4-9c32-126700e83c23 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:56:14 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:14.463931795Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=6015ab92-eda9-4ec7-a302-d236791261d1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:56:14 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:14.465418381Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=8e500e65-90c5-4bf1-a8ea-f0d0304efe1d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:56:14 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:14.46782285Z" level=info msg="Creating container: kube-system/kube-apiserver-force-systemd-flag-275910/kube-apiserver" id=baf0e895-33e9-4f9b-8d82-af0db6b7a2b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:56:14 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:14.468174714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:56:14 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:14.473294998Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:56:14 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:14.473934651Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:56:14 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:14.488970594Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=baf0e895-33e9-4f9b-8d82-af0db6b7a2b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:56:14 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:14.490184956Z" level=info msg="createCtr: deleting container ID fc3a4a297be4fae6adf19c709eb0122b578fe9e6718ef4ba50d3091e0eaa53dd from idIndex" id=baf0e895-33e9-4f9b-8d82-af0db6b7a2b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:56:14 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:14.490311045Z" level=info msg="createCtr: removing container fc3a4a297be4fae6adf19c709eb0122b578fe9e6718ef4ba50d3091e0eaa53dd" id=baf0e895-33e9-4f9b-8d82-af0db6b7a2b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:56:14 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:14.490438833Z" level=info msg="createCtr: deleting container fc3a4a297be4fae6adf19c709eb0122b578fe9e6718ef4ba50d3091e0eaa53dd from storage" id=baf0e895-33e9-4f9b-8d82-af0db6b7a2b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:56:14 force-systemd-flag-275910 crio[838]: time="2025-10-02T07:56:14.494011214Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-flag-275910_kube-system_19e79f4582d163869e7fdd4afc161e99_0" id=baf0e895-33e9-4f9b-8d82-af0db6b7a2b0 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:56:15.552721    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:15.553123    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:15.554692    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:15.555024    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:56:15.556698    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 07:08] overlayfs: idmapped layers are currently not supported
	[  +3.056037] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:16] overlayfs: idmapped layers are currently not supported
	[  +2.690454] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:30] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:31] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:33] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:56:15 up  2:38,  0 user,  load average: 0.04, 0.77, 1.54
	Linux force-systemd-flag-275910 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:56:03 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:03.523276    1781 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-flag-275910\" not found"
	Oct 02 07:56:04 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:04.218323    1781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-flag-275910.186a9d472eacd653  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-flag-275910,UID:force-systemd-flag-275910,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-flag-275910 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-flag-275910,},FirstTimestamp:2025-10-02 07:52:13.492917843 +0000 UTC m=+0.806024385,LastTimestamp:2025-10-02 07:52:13.492917843 +0000 UTC m=+0.806024385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:k
ubelet,ReportingInstance:force-systemd-flag-275910,}"
	Oct 02 07:56:05 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:05.491604    1781 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.85.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 02 07:56:08 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:08.295469    1781 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.85.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dforce-systemd-flag-275910&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 02 07:56:10 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:10.093022    1781 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.85.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-flag-275910?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:56:10 force-systemd-flag-275910 kubelet[1781]: I1002 07:56:10.283968    1781 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-flag-275910"
	Oct 02 07:56:10 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:10.284382    1781 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.85.2:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="force-systemd-flag-275910"
	Oct 02 07:56:10 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:10.463917    1781 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-275910\" not found" node="force-systemd-flag-275910"
	Oct 02 07:56:10 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:10.487662    1781 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:56:10 force-systemd-flag-275910 kubelet[1781]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:56:10 force-systemd-flag-275910 kubelet[1781]:  > podSandboxID="e5232b5fa4c8de17d2274c02186857b7c00b821c845734f7731e8a6821abdf21"
	Oct 02 07:56:10 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:10.487763    1781 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:56:10 force-systemd-flag-275910 kubelet[1781]:         container kube-controller-manager start failed in pod kube-controller-manager-force-systemd-flag-275910_kube-system(ef0e32a9ca4b645721b5026dc9365c32): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:56:10 force-systemd-flag-275910 kubelet[1781]:  > logger="UnhandledError"
	Oct 02 07:56:10 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:10.487796    1781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-force-systemd-flag-275910" podUID="ef0e32a9ca4b645721b5026dc9365c32"
	Oct 02 07:56:13 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:13.523488    1781 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-flag-275910\" not found"
	Oct 02 07:56:14 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:14.219518    1781 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-flag-275910.186a9d472eacd653  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-flag-275910,UID:force-systemd-flag-275910,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-flag-275910 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-flag-275910,},FirstTimestamp:2025-10-02 07:52:13.492917843 +0000 UTC m=+0.806024385,LastTimestamp:2025-10-02 07:52:13.492917843 +0000 UTC m=+0.806024385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:k
ubelet,ReportingInstance:force-systemd-flag-275910,}"
	Oct 02 07:56:14 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:14.463322    1781 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-275910\" not found" node="force-systemd-flag-275910"
	Oct 02 07:56:14 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:14.494487    1781 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:56:14 force-systemd-flag-275910 kubelet[1781]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:56:14 force-systemd-flag-275910 kubelet[1781]:  > podSandboxID="dda8d120a873e93e32bbed1ab25c930dda79a09b529f72e971d9f640eaa68d1b"
	Oct 02 07:56:14 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:14.494571    1781 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:56:14 force-systemd-flag-275910 kubelet[1781]:         container kube-apiserver start failed in pod kube-apiserver-force-systemd-flag-275910_kube-system(19e79f4582d163869e7fdd4afc161e99): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:56:14 force-systemd-flag-275910 kubelet[1781]:  > logger="UnhandledError"
	Oct 02 07:56:14 force-systemd-flag-275910 kubelet[1781]: E1002 07:56:14.494603    1781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-force-systemd-flag-275910" podUID="19e79f4582d163869e7fdd4afc161e99"
	

-- /stdout --
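The kubelet entries above show every control-plane container failing with CreateContainerError: "container create failed: cannot open sd-bus: No such file or directory", i.e. the OCI runtime is being asked to place containers into systemd-managed cgroups but cannot reach a systemd bus inside the node container. A minimal spot-check from the host, assuming the force-systemd-flag-275910 container is still up: the config path is taken from the log further down, while the /run/systemd paths are the usual indicators of a booted systemd and are an assumption to verify, not something shown in this report.

	# Hypothetical spot-checks; run while the node container still exists.
	docker exec force-systemd-flag-275910 grep -E 'cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	docker exec force-systemd-flag-275910 ls /run/systemd/system /run/systemd/private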
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-275910 -n force-systemd-flag-275910
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-275910 -n force-systemd-flag-275910: exit status 6 (332.518186ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 07:56:16.012453  474172 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-275910" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-flag-275910" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-275910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-275910
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-275910: (1.933430584s)
--- FAIL: TestForceSystemdFlag (513.62s)
x
+
TestForceSystemdEnv (512.96s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-297062 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
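For reference, this test drives the same binary with MINIKUBE_FORCE_SYSTEMD=true exported (the variable appears in the environment dump further down). A hedged local reproduction of the invocation, assuming out/minikube-linux-arm64 has already been built in the working tree:

	# Same start command the test runs; MINIKUBE_FORCE_SYSTEMD=true is what forces the systemd cgroup driver here.
	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-297062 \
	  --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=crio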
E1002 07:51:41.264444  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:54:28.911899  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:55:51.979169  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-297062 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m29.441555166s)

-- stdout --
	* [force-systemd-env-297062] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-297062" primary control-plane node in "force-systemd-env-297062" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
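In the stderr log below, minikube rewrites /etc/crio/crio.conf.d/02-crio.conf through a series of sed commands (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl). A sketch of what that drop-in should roughly contain after those edits; it is reconstructed from the sed patterns in the log, not read back from the node, so treat the exact layout as an assumption:

	# Reconstructed from the sed edits shown in the log below, not captured from the machine.
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]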
** stderr ** 
	I1002 07:49:39.523361  470112 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:49:39.523553  470112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:49:39.523581  470112 out.go:374] Setting ErrFile to fd 2...
	I1002 07:49:39.523601  470112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:49:39.524315  470112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:49:39.524811  470112 out.go:368] Setting JSON to false
	I1002 07:49:39.525688  470112 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9131,"bootTime":1759382249,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:49:39.525757  470112 start.go:140] virtualization:  
	I1002 07:49:39.529240  470112 out.go:179] * [force-systemd-env-297062] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:49:39.533057  470112 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:49:39.533180  470112 notify.go:220] Checking for updates...
	I1002 07:49:39.538878  470112 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:49:39.541847  470112 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:49:39.544867  470112 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:49:39.547724  470112 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:49:39.550628  470112 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1002 07:49:39.554067  470112 config.go:182] Loaded profile config "force-systemd-flag-275910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:49:39.554181  470112 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:49:39.586416  470112 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:49:39.586593  470112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:49:39.646129  470112 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:49:39.636902499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:49:39.646239  470112 docker.go:318] overlay module found
	I1002 07:49:39.649357  470112 out.go:179] * Using the docker driver based on user configuration
	I1002 07:49:39.652242  470112 start.go:304] selected driver: docker
	I1002 07:49:39.652261  470112 start.go:924] validating driver "docker" against <nil>
	I1002 07:49:39.652275  470112 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:49:39.653041  470112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:49:39.712792  470112 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:49:39.70329887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:49:39.712948  470112 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 07:49:39.713182  470112 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 07:49:39.716191  470112 out.go:179] * Using Docker driver with root privileges
	I1002 07:49:39.719064  470112 cni.go:84] Creating CNI manager for ""
	I1002 07:49:39.719196  470112 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:49:39.719212  470112 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 07:49:39.719299  470112 start.go:348] cluster config:
	{Name:force-systemd-env-297062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-297062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:49:39.722439  470112 out.go:179] * Starting "force-systemd-env-297062" primary control-plane node in "force-systemd-env-297062" cluster
	I1002 07:49:39.725285  470112 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:49:39.728234  470112 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:49:39.731018  470112 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:49:39.731106  470112 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:49:39.731113  470112 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:49:39.731121  470112 cache.go:58] Caching tarball of preloaded images
	I1002 07:49:39.731226  470112 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:49:39.731236  470112 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:49:39.731340  470112 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/config.json ...
	I1002 07:49:39.731365  470112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/config.json: {Name:mk246686f2a17d8558e63ddf32e6455f3f8b7ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:39.750041  470112 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:49:39.750065  470112 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:49:39.750092  470112 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:49:39.750115  470112 start.go:360] acquireMachinesLock for force-systemd-env-297062: {Name:mka6346f4f34ee7d4de2b8343e2733b1f08800ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:49:39.750220  470112 start.go:364] duration metric: took 85.564µs to acquireMachinesLock for "force-systemd-env-297062"
	I1002 07:49:39.750261  470112 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-297062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-297062 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:49:39.750326  470112 start.go:125] createHost starting for "" (driver="docker")
	I1002 07:49:39.753760  470112 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 07:49:39.754004  470112 start.go:159] libmachine.API.Create for "force-systemd-env-297062" (driver="docker")
	I1002 07:49:39.754054  470112 client.go:168] LocalClient.Create starting
	I1002 07:49:39.754126  470112 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem
	I1002 07:49:39.754165  470112 main.go:141] libmachine: Decoding PEM data...
	I1002 07:49:39.754186  470112 main.go:141] libmachine: Parsing certificate...
	I1002 07:49:39.754251  470112 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem
	I1002 07:49:39.754280  470112 main.go:141] libmachine: Decoding PEM data...
	I1002 07:49:39.754293  470112 main.go:141] libmachine: Parsing certificate...
	I1002 07:49:39.754700  470112 cli_runner.go:164] Run: docker network inspect force-systemd-env-297062 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 07:49:39.771015  470112 cli_runner.go:211] docker network inspect force-systemd-env-297062 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 07:49:39.771132  470112 network_create.go:284] running [docker network inspect force-systemd-env-297062] to gather additional debugging logs...
	I1002 07:49:39.771154  470112 cli_runner.go:164] Run: docker network inspect force-systemd-env-297062
	W1002 07:49:39.788401  470112 cli_runner.go:211] docker network inspect force-systemd-env-297062 returned with exit code 1
	I1002 07:49:39.788435  470112 network_create.go:287] error running [docker network inspect force-systemd-env-297062]: docker network inspect force-systemd-env-297062: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-297062 not found
	I1002 07:49:39.788449  470112 network_create.go:289] output of [docker network inspect force-systemd-env-297062]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-297062 not found
	
	** /stderr **
	I1002 07:49:39.788564  470112 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:49:39.805904  470112 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-87a294cab4b5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:50:ad:a1:2a:88} reservation:<nil>}
	I1002 07:49:39.806289  470112 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-560172b9232e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:9f:ec:fb:3f:87} reservation:<nil>}
	I1002 07:49:39.806457  470112 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2eae6334e56d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:6a:a0:79:3a:d9} reservation:<nil>}
	I1002 07:49:39.806938  470112 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ccdb0}
	I1002 07:49:39.806964  470112 network_create.go:124] attempt to create docker network force-systemd-env-297062 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1002 07:49:39.807025  470112 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-297062 force-systemd-env-297062
	I1002 07:49:39.875512  470112 network_create.go:108] docker network force-systemd-env-297062 192.168.76.0/24 created
	I1002 07:49:39.875548  470112 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-297062" container
	I1002 07:49:39.875645  470112 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 07:49:39.892431  470112 cli_runner.go:164] Run: docker volume create force-systemd-env-297062 --label name.minikube.sigs.k8s.io=force-systemd-env-297062 --label created_by.minikube.sigs.k8s.io=true
	I1002 07:49:39.909585  470112 oci.go:103] Successfully created a docker volume force-systemd-env-297062
	I1002 07:49:39.909689  470112 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-297062-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-297062 --entrypoint /usr/bin/test -v force-systemd-env-297062:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 07:49:40.494652  470112 oci.go:107] Successfully prepared a docker volume force-systemd-env-297062
	I1002 07:49:40.494706  470112 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:49:40.494726  470112 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 07:49:40.494814  470112 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-297062:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 07:49:44.938652  470112 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-297062:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.443780464s)
	I1002 07:49:44.938685  470112 kic.go:203] duration metric: took 4.443955277s to extract preloaded images to volume ...
	W1002 07:49:44.938830  470112 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 07:49:44.938933  470112 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 07:49:44.996025  470112 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-297062 --name force-systemd-env-297062 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-297062 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-297062 --network force-systemd-env-297062 --ip 192.168.76.2 --volume force-systemd-env-297062:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 07:49:45.529031  470112 cli_runner.go:164] Run: docker container inspect force-systemd-env-297062 --format={{.State.Running}}
	I1002 07:49:45.556780  470112 cli_runner.go:164] Run: docker container inspect force-systemd-env-297062 --format={{.State.Status}}
	I1002 07:49:45.584165  470112 cli_runner.go:164] Run: docker exec force-systemd-env-297062 stat /var/lib/dpkg/alternatives/iptables
	I1002 07:49:45.636911  470112 oci.go:144] the created container "force-systemd-env-297062" has a running status.
	I1002 07:49:45.636946  470112 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa...
	I1002 07:49:47.077898  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 07:49:47.077949  470112 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 07:49:47.097082  470112 cli_runner.go:164] Run: docker container inspect force-systemd-env-297062 --format={{.State.Status}}
	I1002 07:49:47.122236  470112 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 07:49:47.122290  470112 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-297062 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 07:49:47.165064  470112 cli_runner.go:164] Run: docker container inspect force-systemd-env-297062 --format={{.State.Status}}
	I1002 07:49:47.181545  470112 machine.go:93] provisionDockerMachine start ...
	I1002 07:49:47.181659  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:47.198951  470112 main.go:141] libmachine: Using SSH client type: native
	I1002 07:49:47.199324  470112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1002 07:49:47.199342  470112 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:49:47.330469  470112 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-297062
	
	I1002 07:49:47.330491  470112 ubuntu.go:182] provisioning hostname "force-systemd-env-297062"
	I1002 07:49:47.330563  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:47.353327  470112 main.go:141] libmachine: Using SSH client type: native
	I1002 07:49:47.353635  470112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1002 07:49:47.353655  470112 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-297062 && echo "force-systemd-env-297062" | sudo tee /etc/hostname
	I1002 07:49:47.493774  470112 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-297062
	
	I1002 07:49:47.493907  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:47.514172  470112 main.go:141] libmachine: Using SSH client type: native
	I1002 07:49:47.514495  470112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1002 07:49:47.514521  470112 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-297062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-297062/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-297062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:49:47.647607  470112 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:49:47.647639  470112 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:49:47.647661  470112 ubuntu.go:190] setting up certificates
	I1002 07:49:47.647670  470112 provision.go:84] configureAuth start
	I1002 07:49:47.647739  470112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-297062
	I1002 07:49:47.665650  470112 provision.go:143] copyHostCerts
	I1002 07:49:47.665691  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:49:47.665735  470112 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:49:47.665745  470112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:49:47.665825  470112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:49:47.665908  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:49:47.665924  470112 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:49:47.665928  470112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:49:47.665954  470112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:49:47.666002  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:49:47.666018  470112 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:49:47.666022  470112 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:49:47.666046  470112 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:49:47.666099  470112 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-297062 san=[127.0.0.1 192.168.76.2 force-systemd-env-297062 localhost minikube]
	I1002 07:49:48.641042  470112 provision.go:177] copyRemoteCerts
	I1002 07:49:48.641123  470112 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:49:48.641170  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:48.658519  470112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa Username:docker}
	I1002 07:49:48.758845  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:49:48.758907  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:49:48.776327  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:49:48.776436  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1002 07:49:48.794204  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:49:48.794268  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:49:48.811763  470112 provision.go:87] duration metric: took 1.164062127s to configureAuth
	I1002 07:49:48.811794  470112 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:49:48.811980  470112 config.go:182] Loaded profile config "force-systemd-env-297062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:49:48.812124  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:48.829004  470112 main.go:141] libmachine: Using SSH client type: native
	I1002 07:49:48.829314  470112 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1002 07:49:48.829329  470112 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:49:49.070460  470112 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:49:49.070487  470112 machine.go:96] duration metric: took 1.888920251s to provisionDockerMachine
	I1002 07:49:49.070499  470112 client.go:171] duration metric: took 9.316433395s to LocalClient.Create
	I1002 07:49:49.070524  470112 start.go:167] duration metric: took 9.31652064s to libmachine.API.Create "force-systemd-env-297062"
	I1002 07:49:49.070538  470112 start.go:293] postStartSetup for "force-systemd-env-297062" (driver="docker")
	I1002 07:49:49.070554  470112 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:49:49.070657  470112 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:49:49.070705  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:49.089184  470112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa Username:docker}
	I1002 07:49:49.187289  470112 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:49:49.190658  470112 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:49:49.190691  470112 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:49:49.190703  470112 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:49:49.190763  470112 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:49:49.190858  470112 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:49:49.190870  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:49:49.190976  470112 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:49:49.198678  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:49:49.216672  470112 start.go:296] duration metric: took 146.111882ms for postStartSetup
	I1002 07:49:49.217112  470112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-297062
	I1002 07:49:49.234066  470112 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/config.json ...
	I1002 07:49:49.234390  470112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:49:49.234445  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:49.251200  470112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa Username:docker}
	I1002 07:49:49.344464  470112 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:49:49.349611  470112 start.go:128] duration metric: took 9.599269105s to createHost
	I1002 07:49:49.349633  470112 start.go:83] releasing machines lock for "force-systemd-env-297062", held for 9.59939929s
	I1002 07:49:49.349707  470112 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-297062
	I1002 07:49:49.366678  470112 ssh_runner.go:195] Run: cat /version.json
	I1002 07:49:49.366736  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:49.366987  470112 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:49:49.367051  470112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-297062
	I1002 07:49:49.389294  470112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa Username:docker}
	I1002 07:49:49.395904  470112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/force-systemd-env-297062/id_rsa Username:docker}
	I1002 07:49:49.578251  470112 ssh_runner.go:195] Run: systemctl --version
	I1002 07:49:49.584735  470112 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:49:49.621278  470112 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:49:49.625616  470112 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:49:49.625728  470112 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:49:49.653532  470112 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 07:49:49.653560  470112 start.go:495] detecting cgroup driver to use...
	I1002 07:49:49.653578  470112 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1002 07:49:49.653634  470112 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:49:49.670631  470112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:49:49.684338  470112 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:49:49.684409  470112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:49:49.703453  470112 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:49:49.722726  470112 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:49:49.832345  470112 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:49:49.968034  470112 docker.go:234] disabling docker service ...
	I1002 07:49:49.968111  470112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:49:49.991849  470112 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:49:50.014003  470112 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:49:50.145119  470112 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:49:50.268445  470112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:49:50.283394  470112 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:49:50.297837  470112 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:49:50.297953  470112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:49:50.308170  470112 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 07:49:50.308260  470112 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:49:50.317348  470112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:49:50.326770  470112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:49:50.336233  470112 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:49:50.344834  470112 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:49:50.353726  470112 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:49:50.367565  470112 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:49:50.376576  470112 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:49:50.384738  470112 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:49:50.392507  470112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:49:50.496603  470112 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:49:50.628622  470112 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:49:50.628739  470112 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:49:50.633395  470112 start.go:563] Will wait 60s for crictl version
	I1002 07:49:50.633517  470112 ssh_runner.go:195] Run: which crictl
	I1002 07:49:50.637606  470112 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:49:50.681431  470112 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:49:50.681579  470112 ssh_runner.go:195] Run: crio --version
	I1002 07:49:50.714375  470112 ssh_runner.go:195] Run: crio --version
	I1002 07:49:50.748641  470112 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:49:50.751562  470112 cli_runner.go:164] Run: docker network inspect force-systemd-env-297062 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:49:50.767806  470112 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 07:49:50.771732  470112 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:49:50.781704  470112 kubeadm.go:883] updating cluster {Name:force-systemd-env-297062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-297062 Namespace:default APIServerHAVIP: APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:49:50.781817  470112 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:49:50.781881  470112 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:49:50.815306  470112 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:49:50.815333  470112 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:49:50.815391  470112 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:49:50.840607  470112 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:49:50.840633  470112 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:49:50.840641  470112 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 07:49:50.840727  470112 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-297062 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-297062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
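The kubelet unit fragment above is written a few lines below as the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hedged way to inspect how systemd renders it on the node (illustrative only, not executed by the test):

	systemctl cat kubelet
	systemctl status kubelet --no-pager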
	I1002 07:49:50.840818  470112 ssh_runner.go:195] Run: crio config
	I1002 07:49:50.904999  470112 cni.go:84] Creating CNI manager for ""
	I1002 07:49:50.905036  470112 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:49:50.905058  470112 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:49:50.905081  470112 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-297062 NodeName:force-systemd-env-297062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:49:50.905213  470112 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-297062"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
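The generated config above is what kubeadm consumes from /var/tmp/minikube/kubeadm.yaml (see the scp a few lines below), so it can be sanity-checked offline. A minimal sketch, assuming a kubeadm v1.34.1 binary is on PATH:

	sudo kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml
	# or, on recent kubeadm releases:
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml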
	I1002 07:49:50.905298  470112 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:49:50.913374  470112 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:49:50.913497  470112 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:49:50.921404  470112 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1002 07:49:50.935032  470112 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:49:50.949244  470112 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1002 07:49:50.962595  470112 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:49:50.966213  470112 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:49:50.976654  470112 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:49:51.091723  470112 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:49:51.108613  470112 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062 for IP: 192.168.76.2
	I1002 07:49:51.108636  470112 certs.go:195] generating shared ca certs ...
	I1002 07:49:51.108654  470112 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:51.108808  470112 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:49:51.108864  470112 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:49:51.108877  470112 certs.go:257] generating profile certs ...
	I1002 07:49:51.108936  470112 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/client.key
	I1002 07:49:51.108964  470112 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/client.crt with IP's: []
	I1002 07:49:52.134890  470112 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/client.crt ...
	I1002 07:49:52.134925  470112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/client.crt: {Name:mk29d9ba4e76056105d441d180e740bc509adc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:52.135134  470112 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/client.key ...
	I1002 07:49:52.135152  470112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/client.key: {Name:mk4df53639a56d3aac7cc9ac26d47f2cbe1ff198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:52.135243  470112 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.key.0baec4d2
	I1002 07:49:52.135267  470112 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.crt.0baec4d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 07:49:52.421027  470112 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.crt.0baec4d2 ...
	I1002 07:49:52.421060  470112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.crt.0baec4d2: {Name:mk38570c71bb432c36b04e2f58d5b43494ac89bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:52.421249  470112 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.key.0baec4d2 ...
	I1002 07:49:52.421265  470112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.key.0baec4d2: {Name:mk9686d7d4802491ec42de5ac6c50b5bba9ebd4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:52.421349  470112 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.crt.0baec4d2 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.crt
	I1002 07:49:52.421439  470112 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.key.0baec4d2 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.key
	I1002 07:49:52.421513  470112 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.key
	I1002 07:49:52.421538  470112 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.crt with IP's: []
	I1002 07:49:52.786161  470112 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.crt ...
	I1002 07:49:52.786195  470112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.crt: {Name:mk5a4dd77b6270a83f8831a7a691fa6b2fd1eb40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:52.786381  470112 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.key ...
	I1002 07:49:52.786397  470112 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.key: {Name:mk4e9848f80c8c4449b73097b339b92cda5f774a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:49:52.786485  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:49:52.786509  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:49:52.786524  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:49:52.786543  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:49:52.786556  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:49:52.786575  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:49:52.786595  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:49:52.786607  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:49:52.786673  470112 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:49:52.786710  470112 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:49:52.786724  470112 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:49:52.786752  470112 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:49:52.786780  470112 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:49:52.786812  470112 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:49:52.786857  470112 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:49:52.786889  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:49:52.786906  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:49:52.786922  470112 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:49:52.787489  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:49:52.805783  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:49:52.824813  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:49:52.843143  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:49:52.861776  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1002 07:49:52.879951  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:49:52.898361  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:49:52.917086  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/force-systemd-env-297062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:49:52.935565  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:49:52.954277  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:49:52.972402  470112 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:49:52.990327  470112 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
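As an aside, the SANs requested for the apiserver certificate earlier in this block (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2) can be confirmed on the node once the copy above completes; an illustrative check, not executed by the test:

	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'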
	I1002 07:49:53.015169  470112 ssh_runner.go:195] Run: openssl version
	I1002 07:49:53.022292  470112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:49:53.031424  470112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:49:53.035492  470112 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:49:53.035609  470112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:49:53.077036  470112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:49:53.085631  470112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:49:53.094333  470112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:49:53.099481  470112 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:49:53.099554  470112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:49:53.140692  470112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:49:53.149125  470112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:49:53.157352  470112 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:49:53.161106  470112 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:49:53.161172  470112 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:49:53.202141  470112 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
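The symlink names created in this block (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hashes of the corresponding certificates, which is how OpenSSL looks up CA files in /etc/ssl/certs. For example, reconstructed from the commands above:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected to print b5213941, matching the b5213941.0 link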
	I1002 07:49:53.210624  470112 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:49:53.214645  470112 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 07:49:53.214703  470112 kubeadm.go:400] StartCluster: {Name:force-systemd-env-297062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-297062 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:49:53.214778  470112 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:49:53.214843  470112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:49:53.242500  470112 cri.go:89] found id: ""
	I1002 07:49:53.242579  470112 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:49:53.250640  470112 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 07:49:53.258658  470112 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 07:49:53.258753  470112 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:49:53.266758  470112 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 07:49:53.266827  470112 kubeadm.go:157] found existing configuration files:
	
	I1002 07:49:53.266891  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 07:49:53.275033  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 07:49:53.275135  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 07:49:53.282720  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 07:49:53.290814  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 07:49:53.290880  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:49:53.298578  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 07:49:53.307130  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 07:49:53.307198  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:49:53.314840  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 07:49:53.322444  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 07:49:53.322520  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:49:53.329847  470112 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 07:49:53.372619  470112 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 07:49:53.373070  470112 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 07:49:53.403280  470112 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 07:49:53.403358  470112 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 07:49:53.403399  470112 kubeadm.go:318] OS: Linux
	I1002 07:49:53.403464  470112 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 07:49:53.403517  470112 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 07:49:53.403568  470112 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 07:49:53.403618  470112 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 07:49:53.403681  470112 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 07:49:53.403734  470112 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 07:49:53.403782  470112 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 07:49:53.403834  470112 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 07:49:53.403883  470112 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 07:49:53.485883  470112 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 07:49:53.486010  470112 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 07:49:53.486112  470112 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 07:49:53.495848  470112 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 07:49:53.502218  470112 out.go:252]   - Generating certificates and keys ...
	I1002 07:49:53.502335  470112 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 07:49:53.502447  470112 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 07:49:54.411234  470112 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 07:49:54.592516  470112 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 07:49:55.229707  470112 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 07:49:56.010053  470112 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 07:49:56.764272  470112 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 07:49:56.764798  470112 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-297062 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 07:49:56.973119  470112 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 07:49:56.973286  470112 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-297062 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 07:49:57.702250  470112 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 07:49:58.164873  470112 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 07:50:01.202730  470112 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 07:50:01.203168  470112 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 07:50:01.700314  470112 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 07:50:02.042814  470112 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 07:50:02.321878  470112 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 07:50:02.469262  470112 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 07:50:02.859374  470112 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 07:50:02.860037  470112 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 07:50:02.862766  470112 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 07:50:02.867276  470112 out.go:252]   - Booting up control plane ...
	I1002 07:50:02.867398  470112 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 07:50:02.867493  470112 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 07:50:02.867570  470112 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 07:50:02.883942  470112 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 07:50:02.884059  470112 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 07:50:02.891936  470112 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 07:50:02.892441  470112 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 07:50:02.892700  470112 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 07:50:03.030988  470112 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 07:50:03.031143  470112 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 07:50:04.032676  470112 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001816124s
	I1002 07:50:04.036642  470112 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 07:50:04.036743  470112 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 07:50:04.036843  470112 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 07:50:04.036933  470112 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:54:04.037698  470112 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000310703s
	I1002 07:54:04.037894  470112 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000925822s
	I1002 07:54:04.038283  470112 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00170073s
	I1002 07:54:04.038309  470112 kubeadm.go:318] 
	I1002 07:54:04.038408  470112 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:54:04.038518  470112 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:54:04.038619  470112 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:54:04.038722  470112 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:54:04.038803  470112 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:54:04.038889  470112 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:54:04.038898  470112 kubeadm.go:318] 
	I1002 07:54:04.043779  470112 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 07:54:04.044029  470112 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 07:54:04.044149  470112 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:54:04.044737  470112 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:54:04.044814  470112 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 07:54:04.044953  470112 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-297062 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-297062 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001816124s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000310703s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000925822s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00170073s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
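The troubleshooting hint embedded in the kubeadm output above can be expanded into a short sequence for anyone reproducing this failure locally (illustrative; the container ID is a placeholder):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	sudo journalctl -u kubelet -u crio --no-pager | tail -n 200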
	I1002 07:54:04.045055  470112 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 07:54:04.601819  470112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:54:04.615822  470112 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 07:54:04.615891  470112 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:54:04.624105  470112 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 07:54:04.624126  470112 kubeadm.go:157] found existing configuration files:
	
	I1002 07:54:04.624181  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 07:54:04.632232  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 07:54:04.632332  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 07:54:04.640516  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 07:54:04.648803  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 07:54:04.648871  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:54:04.656919  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 07:54:04.665159  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 07:54:04.665224  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:54:04.672807  470112 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 07:54:04.680749  470112 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 07:54:04.680812  470112 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:54:04.688578  470112 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 07:54:04.729067  470112 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 07:54:04.729348  470112 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 07:54:04.752417  470112 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 07:54:04.752487  470112 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 07:54:04.752523  470112 kubeadm.go:318] OS: Linux
	I1002 07:54:04.752569  470112 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 07:54:04.752617  470112 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 07:54:04.752665  470112 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 07:54:04.752721  470112 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 07:54:04.752770  470112 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 07:54:04.752818  470112 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 07:54:04.752863  470112 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 07:54:04.752911  470112 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 07:54:04.752957  470112 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 07:54:04.824344  470112 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 07:54:04.824513  470112 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 07:54:04.824628  470112 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 07:54:04.839497  470112 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 07:54:04.844572  470112 out.go:252]   - Generating certificates and keys ...
	I1002 07:54:04.844750  470112 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 07:54:04.844873  470112 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 07:54:04.845003  470112 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 07:54:04.845104  470112 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 07:54:04.845215  470112 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 07:54:04.845316  470112 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 07:54:04.845418  470112 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 07:54:04.845518  470112 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 07:54:04.845636  470112 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 07:54:04.845750  470112 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 07:54:04.845815  470112 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 07:54:04.845910  470112 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 07:54:04.947844  470112 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 07:54:05.333970  470112 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 07:54:05.540423  470112 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 07:54:06.436511  470112 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 07:54:07.220464  470112 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 07:54:07.221100  470112 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 07:54:07.223758  470112 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 07:54:07.227275  470112 out.go:252]   - Booting up control plane ...
	I1002 07:54:07.227373  470112 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 07:54:07.227451  470112 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 07:54:07.227518  470112 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 07:54:07.242215  470112 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 07:54:07.242535  470112 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 07:54:07.251850  470112 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 07:54:07.251962  470112 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 07:54:07.252549  470112 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 07:54:07.399599  470112 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 07:54:07.399734  470112 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 07:54:08.398659  470112 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00400494s
	I1002 07:54:08.403317  470112 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 07:54:08.403681  470112 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 07:54:08.404503  470112 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 07:54:08.404816  470112 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:58:08.404444  470112 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000240271s
	I1002 07:58:08.405346  470112 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000171726s
	I1002 07:58:08.405984  470112 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000579483s
	I1002 07:58:08.406006  470112 kubeadm.go:318] 
	I1002 07:58:08.406103  470112 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:58:08.406195  470112 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:58:08.406297  470112 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:58:08.406404  470112 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:58:08.406486  470112 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:58:08.406581  470112 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:58:08.406590  470112 kubeadm.go:318] 
	I1002 07:58:08.411122  470112 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 07:58:08.411379  470112 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 07:58:08.411495  470112 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:58:08.412072  470112 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:58:08.412152  470112 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:58:08.412215  470112 kubeadm.go:402] duration metric: took 8m15.197517544s to StartCluster
	I1002 07:58:08.412276  470112 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:58:08.412345  470112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:58:08.442645  470112 cri.go:89] found id: ""
	I1002 07:58:08.442719  470112 logs.go:282] 0 containers: []
	W1002 07:58:08.442744  470112 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:58:08.442783  470112 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:58:08.442859  470112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:58:08.469256  470112 cri.go:89] found id: ""
	I1002 07:58:08.469282  470112 logs.go:282] 0 containers: []
	W1002 07:58:08.469291  470112 logs.go:284] No container was found matching "etcd"
	I1002 07:58:08.469337  470112 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:58:08.469414  470112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:58:08.495030  470112 cri.go:89] found id: ""
	I1002 07:58:08.495053  470112 logs.go:282] 0 containers: []
	W1002 07:58:08.495061  470112 logs.go:284] No container was found matching "coredns"
	I1002 07:58:08.495067  470112 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:58:08.495189  470112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:58:08.520327  470112 cri.go:89] found id: ""
	I1002 07:58:08.520353  470112 logs.go:282] 0 containers: []
	W1002 07:58:08.520362  470112 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:58:08.520369  470112 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:58:08.520428  470112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:58:08.546672  470112 cri.go:89] found id: ""
	I1002 07:58:08.546694  470112 logs.go:282] 0 containers: []
	W1002 07:58:08.546703  470112 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:58:08.546709  470112 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:58:08.546775  470112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:58:08.573469  470112 cri.go:89] found id: ""
	I1002 07:58:08.573496  470112 logs.go:282] 0 containers: []
	W1002 07:58:08.573505  470112 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:58:08.573512  470112 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:58:08.573569  470112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:58:08.600071  470112 cri.go:89] found id: ""
	I1002 07:58:08.600094  470112 logs.go:282] 0 containers: []
	W1002 07:58:08.600103  470112 logs.go:284] No container was found matching "kindnet"
	I1002 07:58:08.600112  470112 logs.go:123] Gathering logs for kubelet ...
	I1002 07:58:08.600127  470112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:58:08.693517  470112 logs.go:123] Gathering logs for dmesg ...
	I1002 07:58:08.693554  470112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:58:08.709758  470112 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:58:08.709831  470112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:58:08.779349  470112 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:58:08.770058    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.770852    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.772399    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.772883    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.774392    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:58:08.770058    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.770852    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.772399    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.772883    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.774392    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:58:08.779412  470112 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:58:08.779440  470112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:58:08.857762  470112 logs.go:123] Gathering logs for container status ...
	I1002 07:58:08.857798  470112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:58:08.888962  470112 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00400494s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000240271s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000171726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000579483s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:58:08.889012  470112 out.go:285] * 
	* 
	W1002 07:58:08.889078  470112 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00400494s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000240271s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000171726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000579483s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00400494s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000240271s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000171726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000579483s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:58:08.889096  470112 out.go:285] * 
	* 
	W1002 07:58:08.891288  470112 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:58:08.899113  470112 out.go:203] 
	W1002 07:58:08.902048  470112 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00400494s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000240271s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000171726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000579483s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00400494s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000240271s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000171726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000579483s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:58:08.902077  470112 out.go:285] * 
	* 
	I1002 07:58:08.905306  470112 out.go:203] 

                                                
                                                
** /stderr **
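The kubeadm hint repeated in the output above already names the runtime-level checks; a minimal troubleshooting sketch using exactly those crictl commands (the crio socket path is taken from the log, and CONTAINERID is a placeholder for whatever ID the first command prints):

	# list any kube-* containers the runtime managed to start (per the kubeadm hint above)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of a failing container found above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

In this run the harness's own crictl listings came back empty for every component (kube-apiserver, etcd, kube-scheduler, kube-controller-manager), so there was no container left to inspect.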
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-297062 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-10-02 07:58:08.966644866 +0000 UTC m=+4609.312924424
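A minimal local repro sketch, assuming the same binary layout as the CI job; the start flags are copied from the failed command above, and note that TestForceSystemdEnv also forces systemd through the environment, which this sketch does not reproduce exactly:

	# re-run the failing start, then collect logs as the failure box above recommends
	out/minikube-linux-arm64 start -p force-systemd-env-297062 --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p force-systemd-env-297062 logs --file=logs.txt
	out/minikube-linux-arm64 delete -p force-systemd-env-297062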
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-env-297062
helpers_test.go:243: (dbg) docker inspect force-systemd-env-297062:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7b7d2bbfdadc7b289551b1fcb45819b3c7aedb6d3f1390f54b0d409f69d0aa88",
	        "Created": "2025-10-02T07:49:45.026250324Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470517,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:49:45.14586412Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/7b7d2bbfdadc7b289551b1fcb45819b3c7aedb6d3f1390f54b0d409f69d0aa88/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b7d2bbfdadc7b289551b1fcb45819b3c7aedb6d3f1390f54b0d409f69d0aa88/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b7d2bbfdadc7b289551b1fcb45819b3c7aedb6d3f1390f54b0d409f69d0aa88/hosts",
	        "LogPath": "/var/lib/docker/containers/7b7d2bbfdadc7b289551b1fcb45819b3c7aedb6d3f1390f54b0d409f69d0aa88/7b7d2bbfdadc7b289551b1fcb45819b3c7aedb6d3f1390f54b0d409f69d0aa88-json.log",
	        "Name": "/force-systemd-env-297062",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-297062:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-297062",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b7d2bbfdadc7b289551b1fcb45819b3c7aedb6d3f1390f54b0d409f69d0aa88",
	                "LowerDir": "/var/lib/docker/overlay2/8b0440f217e7b08d2ff6830f6188412983bad2e603acb8742bf7be140fbf8372-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8b0440f217e7b08d2ff6830f6188412983bad2e603acb8742bf7be140fbf8372/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8b0440f217e7b08d2ff6830f6188412983bad2e603acb8742bf7be140fbf8372/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8b0440f217e7b08d2ff6830f6188412983bad2e603acb8742bf7be140fbf8372/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-297062",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-297062/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-297062",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-297062",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-297062",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0928dae6379582133ab781e626cd86ed9b6d1054f7f3a8b27417e292604af2e9",
	            "SandboxKey": "/var/run/docker/netns/0928dae63795",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33383"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33384"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33387"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33385"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33386"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-297062": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:4d:ae:76:2c:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6fab39eaa8cd154c3ec82c62afd02563b8c119ae6f2ef545757a457dd910ea55",
	                    "EndpointID": "e94888a8f45cc97334ebc8b24bb882b7ef9dd08fcf4658e2287aaf7900874710",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-297062",
	                        "7b7d2bbfdadc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
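The inspect dump above is large; a minimal sketch for pulling out only the fields relevant here, using docker inspect's Go-template --format flag (the field paths are the ones visible in the JSON above):

	docker inspect -f '{{.State.Status}} exit={{.State.ExitCode}}' force-systemd-env-297062
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' force-systemd-env-297062

For this container the first command would report a running state with exit code 0, matching the State block above: the node container itself is up, and only the control plane inside it failed to come up.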
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-297062 -n force-systemd-env-297062
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-297062 -n force-systemd-env-297062: exit status 6 (350.986803ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 07:58:09.342630  477309 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-297062" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
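The exit-6 status reflects the kubeconfig problem shown in the stderr above: the profile's endpoint is missing from the harness kubeconfig. A minimal sketch of the fix the stdout itself suggests, assuming the cluster were otherwise reachable:

	out/minikube-linux-arm64 -p force-systemd-env-297062 update-context
	kubectl config current-context   # verify which context kubectl now uses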
helpers_test.go:252: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-297062 logs -n 25
helpers_test.go:260: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-810803 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl status docker --all --full --no-pager                                      │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl cat docker --no-pager                                                      │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /etc/docker/daemon.json                                                          │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo docker system info                                                                   │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cri-dockerd --version                                                                │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl cat containerd --no-pager                                                  │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /etc/containerd/config.toml                                                      │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo containerd config dump                                                               │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl status crio --all --full --no-pager                                        │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl cat crio --no-pager                                                        │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo crio config                                                                          │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ delete  │ -p cilium-810803                                                                                           │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │ 02 Oct 25 07:49 UTC │
	│ start   │ -p force-systemd-env-297062 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-297062  │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ force-systemd-flag-275910 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-275910 │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ delete  │ -p force-systemd-flag-275910                                                                               │ force-systemd-flag-275910 │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ start   │ -p cert-expiration-759246 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio     │ cert-expiration-759246    │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
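	
	Per the Audit table, the failing profile was created with the flags recorded in its start row (note the row has no END TIME). Replaying that recorded command is the shortest way to rerun the same start locally (copied from the table above):
	
	    out/minikube-linux-arm64 start -p force-systemd-env-297062 --memory=3072 \
	      --alsologtostderr -v=5 --driver=docker --container-runtime=crio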
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:56:18
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:56:18.017681  474543 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:56:18.018351  474543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:56:18.018357  474543 out.go:374] Setting ErrFile to fd 2...
	I1002 07:56:18.018360  474543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:56:18.018659  474543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:56:18.019213  474543 out.go:368] Setting JSON to false
	I1002 07:56:18.020133  474543 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9529,"bootTime":1759382249,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:56:18.020197  474543 start.go:140] virtualization:  
	I1002 07:56:18.024145  474543 out.go:179] * [cert-expiration-759246] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:56:18.028793  474543 notify.go:220] Checking for updates...
	I1002 07:56:18.032068  474543 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:56:18.035774  474543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:56:18.039124  474543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:56:18.043158  474543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:56:18.046489  474543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:56:18.049547  474543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:56:18.053332  474543 config.go:182] Loaded profile config "force-systemd-env-297062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:56:18.053447  474543 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:56:18.082280  474543 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:56:18.082469  474543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:56:18.145214  474543 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:56:18.135275999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:56:18.145314  474543 docker.go:318] overlay module found
	I1002 07:56:18.148761  474543 out.go:179] * Using the docker driver based on user configuration
	I1002 07:56:18.151907  474543 start.go:304] selected driver: docker
	I1002 07:56:18.151918  474543 start.go:924] validating driver "docker" against <nil>
	I1002 07:56:18.151937  474543 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:56:18.152801  474543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:56:18.213161  474543 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:56:18.202836414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:56:18.213299  474543 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 07:56:18.213525  474543 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 07:56:18.216613  474543 out.go:179] * Using Docker driver with root privileges
	I1002 07:56:18.219776  474543 cni.go:84] Creating CNI manager for ""
	I1002 07:56:18.219838  474543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:56:18.219845  474543 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 07:56:18.219920  474543 start.go:348] cluster config:
	{Name:cert-expiration-759246 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-759246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:56:18.224763  474543 out.go:179] * Starting "cert-expiration-759246" primary control-plane node in "cert-expiration-759246" cluster
	I1002 07:56:18.227845  474543 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:56:18.230602  474543 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:56:18.233327  474543 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:56:18.233374  474543 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:56:18.233382  474543 cache.go:58] Caching tarball of preloaded images
	I1002 07:56:18.233410  474543 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:56:18.233475  474543 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:56:18.233484  474543 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:56:18.233595  474543 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/config.json ...
	I1002 07:56:18.233611  474543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/config.json: {Name:mkeb86a339e6b8ff8e13df1dace5ddb6204f0336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:56:18.252529  474543 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:56:18.252546  474543 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:56:18.252572  474543 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:56:18.252594  474543 start.go:360] acquireMachinesLock for cert-expiration-759246: {Name:mk9124d9c2087dfeb6c28c0c613ea0a41bf56f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:56:18.252707  474543 start.go:364] duration metric: took 92.39µs to acquireMachinesLock for "cert-expiration-759246"
	I1002 07:56:18.252732  474543 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-759246 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-759246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:56:18.252795  474543 start.go:125] createHost starting for "" (driver="docker")
	I1002 07:56:18.258169  474543 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 07:56:18.258409  474543 start.go:159] libmachine.API.Create for "cert-expiration-759246" (driver="docker")
	I1002 07:56:18.258455  474543 client.go:168] LocalClient.Create starting
	I1002 07:56:18.258548  474543 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem
	I1002 07:56:18.258583  474543 main.go:141] libmachine: Decoding PEM data...
	I1002 07:56:18.258605  474543 main.go:141] libmachine: Parsing certificate...
	I1002 07:56:18.258659  474543 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem
	I1002 07:56:18.258673  474543 main.go:141] libmachine: Decoding PEM data...
	I1002 07:56:18.258681  474543 main.go:141] libmachine: Parsing certificate...
	I1002 07:56:18.259067  474543 cli_runner.go:164] Run: docker network inspect cert-expiration-759246 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 07:56:18.274864  474543 cli_runner.go:211] docker network inspect cert-expiration-759246 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 07:56:18.274932  474543 network_create.go:284] running [docker network inspect cert-expiration-759246] to gather additional debugging logs...
	I1002 07:56:18.274946  474543 cli_runner.go:164] Run: docker network inspect cert-expiration-759246
	W1002 07:56:18.290588  474543 cli_runner.go:211] docker network inspect cert-expiration-759246 returned with exit code 1
	I1002 07:56:18.290607  474543 network_create.go:287] error running [docker network inspect cert-expiration-759246]: docker network inspect cert-expiration-759246: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-759246 not found
	I1002 07:56:18.290618  474543 network_create.go:289] output of [docker network inspect cert-expiration-759246]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-759246 not found
	
	** /stderr **
	I1002 07:56:18.290711  474543 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:56:18.307573  474543 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-87a294cab4b5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:50:ad:a1:2a:88} reservation:<nil>}
	I1002 07:56:18.307978  474543 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-560172b9232e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:9f:ec:fb:3f:87} reservation:<nil>}
	I1002 07:56:18.308116  474543 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2eae6334e56d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:6a:a0:79:3a:d9} reservation:<nil>}
	I1002 07:56:18.308372  474543 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6fab39eaa8cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:5e:30:cd:ea:e6} reservation:<nil>}
	I1002 07:56:18.308772  474543 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a348a0}
	I1002 07:56:18.308787  474543 network_create.go:124] attempt to create docker network cert-expiration-759246 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 07:56:18.308839  474543 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-759246 cert-expiration-759246
	I1002 07:56:18.388461  474543 network_create.go:108] docker network cert-expiration-759246 192.168.85.0/24 created
	I1002 07:56:18.388482  474543 kic.go:121] calculated static IP "192.168.85.2" for the "cert-expiration-759246" container
	I1002 07:56:18.388554  474543 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 07:56:18.425951  474543 cli_runner.go:164] Run: docker volume create cert-expiration-759246 --label name.minikube.sigs.k8s.io=cert-expiration-759246 --label created_by.minikube.sigs.k8s.io=true
	I1002 07:56:18.448692  474543 oci.go:103] Successfully created a docker volume cert-expiration-759246
	I1002 07:56:18.448792  474543 cli_runner.go:164] Run: docker run --rm --name cert-expiration-759246-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-759246 --entrypoint /usr/bin/test -v cert-expiration-759246:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 07:56:19.015813  474543 oci.go:107] Successfully prepared a docker volume cert-expiration-759246
	I1002 07:56:19.015860  474543 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:56:19.015879  474543 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 07:56:19.015943  474543 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-759246:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 07:56:23.468290  474543 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-759246:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.452293194s)
	I1002 07:56:23.468330  474543 kic.go:203] duration metric: took 4.452449035s to extract preloaded images to volume ...
	W1002 07:56:23.468469  474543 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 07:56:23.468568  474543 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 07:56:23.528941  474543 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-759246 --name cert-expiration-759246 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-759246 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-759246 --network cert-expiration-759246 --ip 192.168.85.2 --volume cert-expiration-759246:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 07:56:23.838349  474543 cli_runner.go:164] Run: docker container inspect cert-expiration-759246 --format={{.State.Running}}
	I1002 07:56:23.862779  474543 cli_runner.go:164] Run: docker container inspect cert-expiration-759246 --format={{.State.Status}}
	I1002 07:56:23.889290  474543 cli_runner.go:164] Run: docker exec cert-expiration-759246 stat /var/lib/dpkg/alternatives/iptables
	I1002 07:56:23.940819  474543 oci.go:144] the created container "cert-expiration-759246" has a running status.
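
Up to this point the node is plain docker plumbing: a bridge network on the free subnet found above, a named profile volume, a one-shot tar run that unpacks the preload into that volume, and the kicbase container pinned to the static IP. Condensed into a standalone sketch (values copied from the log lines above; the preload tarball path is shortened to the working directory, and KIC stands in for the full kicbase reference used in the run):

    KIC="gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d"
    docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 cert-expiration-759246
    docker volume create cert-expiration-759246
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PWD/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro" \
      -v cert-expiration-759246:/extractDir "$KIC" -I lz4 -xf /preloaded.tar -C /extractDir
    docker run -d -t --privileged --name cert-expiration-759246 --hostname cert-expiration-759246 \
      --network cert-expiration-759246 --ip 192.168.85.2 --volume cert-expiration-759246:/var "$KIC"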
	I1002 07:56:23.940846  474543 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/cert-expiration-759246/id_rsa...
	I1002 07:56:24.668177  474543 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-292504/.minikube/machines/cert-expiration-759246/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 07:56:24.690841  474543 cli_runner.go:164] Run: docker container inspect cert-expiration-759246 --format={{.State.Status}}
	I1002 07:56:24.710076  474543 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 07:56:24.710093  474543 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-759246 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 07:56:24.753488  474543 cli_runner.go:164] Run: docker container inspect cert-expiration-759246 --format={{.State.Status}}
	I1002 07:56:24.772835  474543 machine.go:93] provisionDockerMachine start ...
	I1002 07:56:24.772924  474543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:56:24.791665  474543 main.go:141] libmachine: Using SSH client type: native
	I1002 07:56:24.791985  474543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1002 07:56:24.791992  474543 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:56:24.792553  474543 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42198->127.0.0.1:33388: read: connection reset by peer
	I1002 07:56:27.926977  474543 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-759246
	
	I1002 07:56:27.926992  474543 ubuntu.go:182] provisioning hostname "cert-expiration-759246"
	I1002 07:56:27.927070  474543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:56:27.945344  474543 main.go:141] libmachine: Using SSH client type: native
	I1002 07:56:27.945668  474543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1002 07:56:27.945679  474543 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-759246 && echo "cert-expiration-759246" | sudo tee /etc/hostname
	I1002 07:56:28.092869  474543 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-759246
	
	I1002 07:56:28.092956  474543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:56:28.113614  474543 main.go:141] libmachine: Using SSH client type: native
	I1002 07:56:28.113931  474543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1002 07:56:28.113946  474543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-759246' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-759246/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-759246' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:56:28.247385  474543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:56:28.247401  474543 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:56:28.247417  474543 ubuntu.go:190] setting up certificates
	I1002 07:56:28.247425  474543 provision.go:84] configureAuth start
	I1002 07:56:28.247496  474543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-759246
	I1002 07:56:28.264862  474543 provision.go:143] copyHostCerts
	I1002 07:56:28.264920  474543 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:56:28.264928  474543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:56:28.265009  474543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:56:28.265093  474543 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:56:28.265097  474543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:56:28.265119  474543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:56:28.265167  474543 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:56:28.265170  474543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:56:28.265190  474543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:56:28.265231  474543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-759246 san=[127.0.0.1 192.168.85.2 cert-expiration-759246 localhost minikube]
	I1002 07:56:28.659013  474543 provision.go:177] copyRemoteCerts
	I1002 07:56:28.659076  474543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:56:28.659137  474543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:56:28.679351  474543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/cert-expiration-759246/id_rsa Username:docker}
	I1002 07:56:28.776071  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:56:28.793782  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 07:56:28.811949  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:56:28.830345  474543 provision.go:87] duration metric: took 582.905835ms to configureAuth
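
configureAuth finishes with the server certificate and key staged under /etc/docker inside the node. The SANs requested in the generation step above (127.0.0.1, 192.168.85.2, the profile name, localhost, minikube) can be read back with openssl (a quick check via minikube ssh; not part of the run):

    out/minikube-linux-arm64 -p cert-expiration-759246 ssh -- \
      "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"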
	I1002 07:56:28.830362  474543 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:56:28.830560  474543 config.go:182] Loaded profile config "cert-expiration-759246": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:56:28.830661  474543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:56:28.856945  474543 main.go:141] libmachine: Using SSH client type: native
	I1002 07:56:28.857239  474543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1002 07:56:28.857250  474543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:56:29.092927  474543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:56:29.092941  474543 machine.go:96] duration metric: took 4.320094668s to provisionDockerMachine
	I1002 07:56:29.092950  474543 client.go:171] duration metric: took 10.834489443s to LocalClient.Create
	I1002 07:56:29.092975  474543 start.go:167] duration metric: took 10.83456827s to libmachine.API.Create "cert-expiration-759246"
	I1002 07:56:29.092981  474543 start.go:293] postStartSetup for "cert-expiration-759246" (driver="docker")
	I1002 07:56:29.092990  474543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:56:29.093065  474543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:56:29.093139  474543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:56:29.111323  474543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/cert-expiration-759246/id_rsa Username:docker}
	I1002 07:56:29.207448  474543 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:56:29.210541  474543 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:56:29.210559  474543 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:56:29.210568  474543 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:56:29.210621  474543 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:56:29.210699  474543 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:56:29.210802  474543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:56:29.218193  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:56:29.235345  474543 start.go:296] duration metric: took 142.349795ms for postStartSetup
	I1002 07:56:29.235702  474543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-759246
	I1002 07:56:29.252010  474543 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/config.json ...
	I1002 07:56:29.252275  474543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:56:29.252313  474543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:56:29.268284  474543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/cert-expiration-759246/id_rsa Username:docker}
	I1002 07:56:29.360442  474543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:56:29.365294  474543 start.go:128] duration metric: took 11.112484905s to createHost
	I1002 07:56:29.365308  474543 start.go:83] releasing machines lock for "cert-expiration-759246", held for 11.112594059s
	I1002 07:56:29.365382  474543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-759246
	I1002 07:56:29.382100  474543 ssh_runner.go:195] Run: cat /version.json
	I1002 07:56:29.382143  474543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:56:29.382404  474543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:56:29.382451  474543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:56:29.406635  474543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/cert-expiration-759246/id_rsa Username:docker}
	I1002 07:56:29.409780  474543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/cert-expiration-759246/id_rsa Username:docker}
	I1002 07:56:29.498884  474543 ssh_runner.go:195] Run: systemctl --version
	I1002 07:56:29.589459  474543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:56:29.624446  474543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:56:29.628922  474543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:56:29.628990  474543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:56:29.657722  474543 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 07:56:29.657736  474543 start.go:495] detecting cgroup driver to use...
	I1002 07:56:29.657768  474543 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:56:29.657816  474543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:56:29.675404  474543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:56:29.688860  474543 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:56:29.688922  474543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:56:29.706979  474543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:56:29.726960  474543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:56:29.847037  474543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:56:29.972105  474543 docker.go:234] disabling docker service ...
	I1002 07:56:29.972165  474543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:56:29.993460  474543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:56:30.031262  474543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:56:30.175675  474543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:56:30.299556  474543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:56:30.313075  474543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:56:30.328991  474543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:56:30.329046  474543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:56:30.339337  474543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:56:30.339399  474543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:56:30.349339  474543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:56:30.359240  474543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:56:30.372049  474543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:56:30.381801  474543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:56:30.392358  474543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:56:30.408267  474543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:56:30.417242  474543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:56:30.425053  474543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:56:30.432973  474543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:56:30.549989  474543 ssh_runner.go:195] Run: sudo systemctl restart crio
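
The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. After the restart, the effective values can be read back with a single grep (a sketch collecting only the keys touched above):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected per the edits: pause_image = "registry.k8s.io/pause:3.10.1"
    #                         cgroup_manager = "cgroupfs", conmon_cgroup = "pod"
    #                         "net.ipv4.ip_unprivileged_port_start=0",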
	I1002 07:56:30.678758  474543 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:56:30.678838  474543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:56:30.682851  474543 start.go:563] Will wait 60s for crictl version
	I1002 07:56:30.682905  474543 ssh_runner.go:195] Run: which crictl
	I1002 07:56:30.686595  474543 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:56:30.712041  474543 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:56:30.712139  474543 ssh_runner.go:195] Run: crio --version
	I1002 07:56:30.741713  474543 ssh_runner.go:195] Run: crio --version
	I1002 07:56:30.774937  474543 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:56:30.777965  474543 cli_runner.go:164] Run: docker network inspect cert-expiration-759246 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:56:30.794121  474543 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 07:56:30.798191  474543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:56:30.808146  474543 kubeadm.go:883] updating cluster {Name:cert-expiration-759246 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-759246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:56:30.808245  474543 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:56:30.808295  474543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:56:30.843997  474543 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:56:30.844009  474543 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:56:30.844070  474543 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:56:30.869622  474543 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:56:30.869633  474543 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:56:30.869640  474543 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 07:56:30.869720  474543 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-759246 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-759246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
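
The ExecStart override above lands on the node as a systemd drop-in (the 372-byte 10-kubeadm.conf scp'd a few lines below). Whether the override and the crio dependency took effect can be confirmed from the merged unit once kubelet is started (a sketch run inside the node):

    systemctl cat kubelet --no-pager | grep -e 'Wants=crio.service' -e 'node-ip=192.168.85.2'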
	I1002 07:56:30.869794  474543 ssh_runner.go:195] Run: crio config
	I1002 07:56:30.930068  474543 cni.go:84] Creating CNI manager for ""
	I1002 07:56:30.930078  474543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:56:30.930095  474543 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:56:30.930126  474543 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-759246 NodeName:cert-expiration-759246 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:56:30.930268  474543 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-759246"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:56:30.930357  474543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:56:30.938117  474543 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:56:30.938176  474543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:56:30.946256  474543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1002 07:56:30.959707  474543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:56:30.973186  474543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
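	The kubeadm.yaml just copied to the node is the config minikube later feeds to kubeadm init. A minimal way to sanity-check such a file before the real init (a sketch, assuming you reach the node with "minikube ssh -p cert-expiration-759246"; --dry-run renders the manifests without changing node state):

	    # run inside the minikube node
	    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run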
	I1002 07:56:30.986905  474543 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:56:30.991056  474543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
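	The guarded rewrite above keeps /etc/hosts idempotent: any existing control-plane.minikube.internal entry is filtered out before the fresh one is appended. A quick verification of the result (a sketch):

	    # should resolve to the node IP recorded above
	    getent hosts control-plane.minikube.internal   # expected: 192.168.85.2  control-plane.minikube.internal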
	I1002 07:56:31.001420  474543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:56:31.120416  474543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:56:31.139218  474543 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246 for IP: 192.168.85.2
	I1002 07:56:31.139229  474543 certs.go:195] generating shared ca certs ...
	I1002 07:56:31.139248  474543 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:56:31.139405  474543 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:56:31.139447  474543 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:56:31.139453  474543 certs.go:257] generating profile certs ...
	I1002 07:56:31.139510  474543 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/client.key
	I1002 07:56:31.139520  474543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/client.crt with IP's: []
	I1002 07:56:31.540119  474543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/client.crt ...
	I1002 07:56:31.540137  474543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/client.crt: {Name:mkf506e6fad26528971f0f00e1e1df7517ed9d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:56:31.540343  474543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/client.key ...
	I1002 07:56:31.540353  474543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/client.key: {Name:mka09ce693b3a3fc25495967e5a451093c4aa3f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:56:31.540447  474543 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/apiserver.key.406b5424
	I1002 07:56:31.540460  474543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/apiserver.crt.406b5424 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 07:56:32.717599  474543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/apiserver.crt.406b5424 ...
	I1002 07:56:32.717615  474543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/apiserver.crt.406b5424: {Name:mk0ad45c22532aabfb3b2fa2254b5e5a0e38fedd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:56:32.717812  474543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/apiserver.key.406b5424 ...
	I1002 07:56:32.717822  474543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/apiserver.key.406b5424: {Name:mk6a248fd96af69bbc18f1416d84970ca1f2574c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:56:32.717905  474543 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/apiserver.crt.406b5424 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/apiserver.crt
	I1002 07:56:32.717977  474543 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/apiserver.key.406b5424 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/apiserver.key
	I1002 07:56:32.718047  474543 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/proxy-client.key
	I1002 07:56:32.718057  474543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/proxy-client.crt with IP's: []
	I1002 07:56:33.101309  474543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/proxy-client.crt ...
	I1002 07:56:33.101326  474543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/proxy-client.crt: {Name:mka02df1a73f789c48467af1c006f9df77225bfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:56:33.101523  474543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/proxy-client.key ...
	I1002 07:56:33.101531  474543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/proxy-client.key: {Name:mkf8aad3b9d1dc50125a7f93852004b9466b2564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:56:33.101722  474543 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:56:33.101757  474543 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:56:33.101765  474543 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:56:33.101798  474543 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:56:33.101821  474543 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:56:33.101840  474543 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:56:33.101882  474543 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:56:33.102485  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:56:33.122101  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:56:33.141706  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:56:33.161999  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:56:33.181774  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 07:56:33.199652  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:56:33.217375  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:56:33.235187  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:56:33.252708  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:56:33.270846  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:56:33.288299  474543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:56:33.306074  474543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:56:33.318996  474543 ssh_runner.go:195] Run: openssl version
	I1002 07:56:33.325612  474543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:56:33.333799  474543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:56:33.337411  474543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:56:33.337469  474543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:56:33.378600  474543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:56:33.388757  474543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:56:33.402697  474543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:56:33.408535  474543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:56:33.408592  474543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:56:33.450522  474543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:56:33.462607  474543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:56:33.471298  474543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:56:33.475341  474543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:56:33.475404  474543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:56:33.517445  474543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
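	The test/ln pairs above implement the standard OpenSSL hashed-symlink layout under /etc/ssl/certs: each CA file is linked from a file named after its subject hash. Done by hand, the same step looks roughly like this (a sketch, assuming openssl is available on the node):

	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941, matching the link created above
	    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"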
	I1002 07:56:33.525940  474543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:56:33.529561  474543 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 07:56:33.529608  474543 kubeadm.go:400] StartCluster: {Name:cert-expiration-759246 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-759246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:56:33.529691  474543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:56:33.529750  474543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:56:33.557334  474543 cri.go:89] found id: ""
	I1002 07:56:33.557423  474543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:56:33.565685  474543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 07:56:33.573935  474543 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 07:56:33.574003  474543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:56:33.583097  474543 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 07:56:33.583109  474543 kubeadm.go:157] found existing configuration files:
	
	I1002 07:56:33.583162  474543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 07:56:33.591351  474543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 07:56:33.591418  474543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 07:56:33.599148  474543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 07:56:33.607064  474543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 07:56:33.607146  474543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:56:33.614614  474543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 07:56:33.622405  474543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 07:56:33.622457  474543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:56:33.630480  474543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 07:56:33.638401  474543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 07:56:33.638483  474543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:56:33.646102  474543 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 07:56:33.690255  474543 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 07:56:33.690577  474543 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 07:56:33.714303  474543 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 07:56:33.714376  474543 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 07:56:33.714411  474543 kubeadm.go:318] OS: Linux
	I1002 07:56:33.714456  474543 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 07:56:33.714503  474543 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 07:56:33.714549  474543 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 07:56:33.714596  474543 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 07:56:33.714642  474543 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 07:56:33.714691  474543 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 07:56:33.714735  474543 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 07:56:33.714782  474543 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 07:56:33.714827  474543 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 07:56:33.784520  474543 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 07:56:33.784628  474543 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 07:56:33.784722  474543 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 07:56:33.795578  474543 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 07:56:33.801325  474543 out.go:252]   - Generating certificates and keys ...
	I1002 07:56:33.801414  474543 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 07:56:33.801482  474543 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 07:56:34.102836  474543 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 07:56:34.792643  474543 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 07:56:35.116652  474543 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 07:56:35.906492  474543 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 07:56:36.163592  474543 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 07:56:36.163862  474543 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-759246 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 07:56:36.807757  474543 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 07:56:36.808051  474543 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-759246 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 07:56:37.495961  474543 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 07:56:37.864064  474543 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 07:56:38.094935  474543 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 07:56:38.095188  474543 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 07:56:38.680675  474543 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 07:56:39.159982  474543 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 07:56:39.496099  474543 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 07:56:40.308494  474543 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 07:56:40.959292  474543 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 07:56:40.959911  474543 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 07:56:40.962554  474543 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 07:56:40.966690  474543 out.go:252]   - Booting up control plane ...
	I1002 07:56:40.966829  474543 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 07:56:40.966909  474543 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 07:56:40.968542  474543 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 07:56:40.985291  474543 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 07:56:40.985505  474543 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 07:56:40.994439  474543 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 07:56:40.994886  474543 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 07:56:40.995047  474543 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 07:56:41.137678  474543 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 07:56:41.137794  474543 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 07:56:44.138808  474543 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 3.001684521s
	I1002 07:56:44.142463  474543 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 07:56:44.142555  474543 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 07:56:44.142647  474543 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 07:56:44.142738  474543 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:56:46.272786  474543 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.129954011s
	I1002 07:56:48.868571  474543 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.726109111s
	I1002 07:56:50.144038  474543 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.00144282s
	I1002 07:56:50.164266  474543 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 07:56:50.180604  474543 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 07:56:50.193506  474543 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 07:56:50.193712  474543 kubeadm.go:318] [mark-control-plane] Marking the node cert-expiration-759246 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 07:56:50.208630  474543 kubeadm.go:318] [bootstrap-token] Using token: 2coonc.qasibp8ejruwjz0r
	I1002 07:56:50.211748  474543 out.go:252]   - Configuring RBAC rules ...
	I1002 07:56:50.211875  474543 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 07:56:50.216234  474543 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 07:56:50.225385  474543 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 07:56:50.229942  474543 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 07:56:50.234414  474543 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 07:56:50.240854  474543 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 07:56:50.553176  474543 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 07:56:50.985790  474543 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 07:56:51.552302  474543 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 07:56:51.553338  474543 kubeadm.go:318] 
	I1002 07:56:51.553413  474543 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 07:56:51.553417  474543 kubeadm.go:318] 
	I1002 07:56:51.553493  474543 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 07:56:51.553496  474543 kubeadm.go:318] 
	I1002 07:56:51.553520  474543 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 07:56:51.553578  474543 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 07:56:51.553627  474543 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 07:56:51.553630  474543 kubeadm.go:318] 
	I1002 07:56:51.553682  474543 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 07:56:51.553686  474543 kubeadm.go:318] 
	I1002 07:56:51.553732  474543 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 07:56:51.553735  474543 kubeadm.go:318] 
	I1002 07:56:51.553786  474543 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 07:56:51.553859  474543 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 07:56:51.553930  474543 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 07:56:51.553933  474543 kubeadm.go:318] 
	I1002 07:56:51.554017  474543 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 07:56:51.554092  474543 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 07:56:51.554095  474543 kubeadm.go:318] 
	I1002 07:56:51.554177  474543 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 2coonc.qasibp8ejruwjz0r \
	I1002 07:56:51.554278  474543 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf \
	I1002 07:56:51.554576  474543 kubeadm.go:318] 	--control-plane 
	I1002 07:56:51.554585  474543 kubeadm.go:318] 
	I1002 07:56:51.554694  474543 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 07:56:51.554700  474543 kubeadm.go:318] 
	I1002 07:56:51.554787  474543 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 2coonc.qasibp8ejruwjz0r \
	I1002 07:56:51.554888  474543 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf 
	I1002 07:56:51.559742  474543 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 07:56:51.560001  474543 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 07:56:51.560122  474543 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:56:51.560141  474543 cni.go:84] Creating CNI manager for ""
	I1002 07:56:51.560157  474543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:56:51.563299  474543 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 07:56:51.566238  474543 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 07:56:51.571938  474543 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 07:56:51.571949  474543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 07:56:51.585102  474543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
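	Once the API server answers, the kindnet manifest applied here can be checked for a successful rollout. A hedged example (the DaemonSet name kindnet is an assumption based on minikube's bundled kindnet manifest):

	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get daemonset kindnet -o wide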
	I1002 07:56:51.925702  474543 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 07:56:51.925770  474543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 07:56:51.925815  474543 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-759246 minikube.k8s.io/updated_at=2025_10_02T07_56_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=cert-expiration-759246 minikube.k8s.io/primary=true
	I1002 07:56:52.110817  474543 ops.go:34] apiserver oom_adj: -16
	I1002 07:56:52.110857  474543 kubeadm.go:1113] duration metric: took 185.141224ms to wait for elevateKubeSystemPrivileges
	I1002 07:56:52.110879  474543 kubeadm.go:402] duration metric: took 18.581275717s to StartCluster
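	The minikube-rbac ClusterRoleBinding created two steps above is what grants the kube-system:default service account cluster-admin for the addon machinery; it can be inspected afterwards with the same on-node kubectl (a sketch):

	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      get clusterrolebinding minikube-rbac -o yaml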
	I1002 07:56:52.110896  474543 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:56:52.110960  474543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:56:52.111655  474543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:56:52.111890  474543 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:56:52.111983  474543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 07:56:52.112252  474543 config.go:182] Loaded profile config "cert-expiration-759246": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:56:52.112294  474543 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:56:52.112357  474543 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-759246"
	I1002 07:56:52.112372  474543 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-759246"
	I1002 07:56:52.112393  474543 host.go:66] Checking if "cert-expiration-759246" exists ...
	I1002 07:56:52.112906  474543 cli_runner.go:164] Run: docker container inspect cert-expiration-759246 --format={{.State.Status}}
	I1002 07:56:52.113256  474543 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-759246"
	I1002 07:56:52.113267  474543 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-759246"
	I1002 07:56:52.113539  474543 cli_runner.go:164] Run: docker container inspect cert-expiration-759246 --format={{.State.Status}}
	I1002 07:56:52.117439  474543 out.go:179] * Verifying Kubernetes components...
	I1002 07:56:52.120599  474543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:56:52.145312  474543 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:56:52.148319  474543 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:56:52.148331  474543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:56:52.148407  474543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:56:52.175248  474543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/cert-expiration-759246/id_rsa Username:docker}
	I1002 07:56:52.176066  474543 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-759246"
	I1002 07:56:52.176097  474543 host.go:66] Checking if "cert-expiration-759246" exists ...
	I1002 07:56:52.176543  474543 cli_runner.go:164] Run: docker container inspect cert-expiration-759246 --format={{.State.Status}}
	I1002 07:56:52.206803  474543 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:56:52.206816  474543 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:56:52.206881  474543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:56:52.232814  474543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/cert-expiration-759246/id_rsa Username:docker}
	I1002 07:56:52.372108  474543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 07:56:52.414135  474543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:56:52.420795  474543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:56:52.462519  474543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:56:52.752721  474543 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1002 07:56:52.754177  474543 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:56:52.754228  474543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:56:53.048109  474543 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1002 07:56:53.048242  474543 api_server.go:72] duration metric: took 936.329201ms to wait for apiserver process to appear ...
	I1002 07:56:53.048258  474543 api_server.go:88] waiting for apiserver healthz status ...
	I1002 07:56:53.048279  474543 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 07:56:53.051072  474543 addons.go:514] duration metric: took 938.756053ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1002 07:56:53.062643  474543 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 07:56:53.065114  474543 api_server.go:141] control plane version: v1.34.1
	I1002 07:56:53.065133  474543 api_server.go:131] duration metric: took 16.86965ms to wait for apiserver health ...
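	The healthz probe minikube performs here can be reproduced from a shell as well; a sketch, assuming the default RBAC rule that exposes /healthz to unauthenticated clients is still in place:

	    # -k skips verification of the self-signed minikube CA; expect the single word "ok"
	    curl -k https://192.168.85.2:8443/healthz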
	I1002 07:56:53.065140  474543 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 07:56:53.067928  474543 system_pods.go:59] 5 kube-system pods found
	I1002 07:56:53.067949  474543 system_pods.go:61] "etcd-cert-expiration-759246" [e4fda545-1ea5-4a78-8d88-8650c1535319] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 07:56:53.067957  474543 system_pods.go:61] "kube-apiserver-cert-expiration-759246" [f989a240-b8fa-46f7-b318-1768cdcc2960] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 07:56:53.067963  474543 system_pods.go:61] "kube-controller-manager-cert-expiration-759246" [d5305eb9-b897-4ef3-a41b-763cbe6e6cd8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 07:56:53.067970  474543 system_pods.go:61] "kube-scheduler-cert-expiration-759246" [8ceecd6c-73ba-483e-af65-b54944d71493] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 07:56:53.067974  474543 system_pods.go:61] "storage-provisioner" [38735ebb-90d5-4f99-b991-ae052e00a5d2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 07:56:53.067979  474543 system_pods.go:74] duration metric: took 2.83429ms to wait for pod list to return data ...
	I1002 07:56:53.067991  474543 kubeadm.go:586] duration metric: took 956.080149ms to wait for: map[apiserver:true system_pods:true]
	I1002 07:56:53.068003  474543 node_conditions.go:102] verifying NodePressure condition ...
	I1002 07:56:53.072281  474543 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 07:56:53.072306  474543 node_conditions.go:123] node cpu capacity is 2
	I1002 07:56:53.072317  474543 node_conditions.go:105] duration metric: took 4.310439ms to run NodePressure ...
	I1002 07:56:53.072329  474543 start.go:241] waiting for startup goroutines ...
	I1002 07:56:53.258197  474543 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-759246" context rescaled to 1 replicas
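	The rescale logged above (coredns down to 1 replica) is performed through client-go; the equivalent CLI call, shown only as a sketch, would be:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system scale deployment coredns --replicas=1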
	I1002 07:56:53.258228  474543 start.go:246] waiting for cluster config update ...
	I1002 07:56:53.258239  474543 start.go:255] writing updated cluster config ...
	I1002 07:56:53.258555  474543 ssh_runner.go:195] Run: rm -f paused
	I1002 07:56:53.321293  474543 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 07:56:53.324693  474543 out.go:179] * Done! kubectl is now configured to use "cert-expiration-759246" cluster and "default" namespace by default
	I1002 07:58:08.404444  470112 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000240271s
	I1002 07:58:08.405346  470112 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000171726s
	I1002 07:58:08.405984  470112 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000579483s
	I1002 07:58:08.406006  470112 kubeadm.go:318] 
	I1002 07:58:08.406103  470112 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 07:58:08.406195  470112 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 07:58:08.406297  470112 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 07:58:08.406404  470112 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 07:58:08.406486  470112 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 07:58:08.406581  470112 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 07:58:08.406590  470112 kubeadm.go:318] 
	I1002 07:58:08.411122  470112 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 07:58:08.411379  470112 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 07:58:08.411495  470112 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:58:08.412072  470112 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 07:58:08.412152  470112 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 07:58:08.412215  470112 kubeadm.go:402] duration metric: took 8m15.197517544s to StartCluster
	I1002 07:58:08.412276  470112 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:58:08.412345  470112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:58:08.442645  470112 cri.go:89] found id: ""
	I1002 07:58:08.442719  470112 logs.go:282] 0 containers: []
	W1002 07:58:08.442744  470112 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:58:08.442783  470112 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:58:08.442859  470112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:58:08.469256  470112 cri.go:89] found id: ""
	I1002 07:58:08.469282  470112 logs.go:282] 0 containers: []
	W1002 07:58:08.469291  470112 logs.go:284] No container was found matching "etcd"
	I1002 07:58:08.469337  470112 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:58:08.469414  470112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:58:08.495030  470112 cri.go:89] found id: ""
	I1002 07:58:08.495053  470112 logs.go:282] 0 containers: []
	W1002 07:58:08.495061  470112 logs.go:284] No container was found matching "coredns"
	I1002 07:58:08.495067  470112 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:58:08.495189  470112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:58:08.520327  470112 cri.go:89] found id: ""
	I1002 07:58:08.520353  470112 logs.go:282] 0 containers: []
	W1002 07:58:08.520362  470112 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:58:08.520369  470112 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:58:08.520428  470112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:58:08.546672  470112 cri.go:89] found id: ""
	I1002 07:58:08.546694  470112 logs.go:282] 0 containers: []
	W1002 07:58:08.546703  470112 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:58:08.546709  470112 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:58:08.546775  470112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:58:08.573469  470112 cri.go:89] found id: ""
	I1002 07:58:08.573496  470112 logs.go:282] 0 containers: []
	W1002 07:58:08.573505  470112 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:58:08.573512  470112 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:58:08.573569  470112 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:58:08.600071  470112 cri.go:89] found id: ""
	I1002 07:58:08.600094  470112 logs.go:282] 0 containers: []
	W1002 07:58:08.600103  470112 logs.go:284] No container was found matching "kindnet"
	I1002 07:58:08.600112  470112 logs.go:123] Gathering logs for kubelet ...
	I1002 07:58:08.600127  470112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:58:08.693517  470112 logs.go:123] Gathering logs for dmesg ...
	I1002 07:58:08.693554  470112 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:58:08.709758  470112 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:58:08.709831  470112 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:58:08.779349  470112 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:58:08.770058    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.770852    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.772399    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.772883    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.774392    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:58:08.770058    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.770852    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.772399    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.772883    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:08.774392    2379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:58:08.779412  470112 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:58:08.779440  470112 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:58:08.857762  470112 logs.go:123] Gathering logs for container status ...
	I1002 07:58:08.857798  470112 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:58:08.888962  470112 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00400494s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000240271s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000171726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000579483s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 07:58:08.889012  470112 out.go:285] * 
	W1002 07:58:08.889078  470112 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00400494s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000240271s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000171726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000579483s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:58:08.889096  470112 out.go:285] * 
	W1002 07:58:08.891288  470112 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:58:08.899113  470112 out.go:203] 
	W1002 07:58:08.902048  470112 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00400494s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000240271s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000171726s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000579483s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 07:58:08.902077  470112 out.go:285] * 
	I1002 07:58:08.905306  470112 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:58:01 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:01.389507565Z" level=info msg="createCtr: removing container 89db745f4a5f00b01d784d232fe19a32f37bf02d7b1583e823c88a4544522aec" id=5b222618-4d5d-47fc-a194-a4ec15f64a4f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:58:01 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:01.389605298Z" level=info msg="createCtr: deleting container 89db745f4a5f00b01d784d232fe19a32f37bf02d7b1583e823c88a4544522aec from storage" id=5b222618-4d5d-47fc-a194-a4ec15f64a4f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:58:01 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:01.394232958Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-force-systemd-env-297062_kube-system_c1ee4c881e1e09bc75c6dfd41e93a020_0" id=5b222618-4d5d-47fc-a194-a4ec15f64a4f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:58:03 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:03.365378922Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=32db6303-40cc-4c7d-b103-168622c8faa7 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:58:03 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:03.366233502Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=1a33fbb9-d0ef-4fce-98d5-0bf43f675bf9 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:58:03 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:03.367135065Z" level=info msg="Creating container: kube-system/kube-apiserver-force-systemd-env-297062/kube-apiserver" id=103b9789-39ca-4265-b1ce-1c4e96c8cfa2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:58:03 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:03.367360224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:58:03 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:03.373575075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:58:03 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:03.374081082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:58:03 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:03.38430268Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=103b9789-39ca-4265-b1ce-1c4e96c8cfa2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:58:03 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:03.386090857Z" level=info msg="createCtr: deleting container ID 5ef77ead25516a26797a9d5df76800a7c8367b62f5c97bdc3f2bab29e4e893d8 from idIndex" id=103b9789-39ca-4265-b1ce-1c4e96c8cfa2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:58:03 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:03.38613441Z" level=info msg="createCtr: removing container 5ef77ead25516a26797a9d5df76800a7c8367b62f5c97bdc3f2bab29e4e893d8" id=103b9789-39ca-4265-b1ce-1c4e96c8cfa2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:58:03 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:03.3861692Z" level=info msg="createCtr: deleting container 5ef77ead25516a26797a9d5df76800a7c8367b62f5c97bdc3f2bab29e4e893d8 from storage" id=103b9789-39ca-4265-b1ce-1c4e96c8cfa2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:58:03 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:03.389094642Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-env-297062_kube-system_689cdb6697813f82ad577e618d54aa93_0" id=103b9789-39ca-4265-b1ce-1c4e96c8cfa2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:58:07 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:07.365919986Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c5bb450c-d265-4986-b958-0b0678eefd27 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:58:07 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:07.367264228Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=ee39f08e-e636-49a2-9117-6ea3e2a8d45e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:58:07 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:07.368146837Z" level=info msg="Creating container: kube-system/etcd-force-systemd-env-297062/etcd" id=66ba2669-8ad0-4eac-871a-1c21d7eabde8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:58:07 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:07.368380209Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:58:07 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:07.372846235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:58:07 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:07.37345118Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:58:07 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:07.390801944Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=66ba2669-8ad0-4eac-871a-1c21d7eabde8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:58:07 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:07.392989598Z" level=info msg="createCtr: deleting container ID 9a0d8a4f22c652c1a96c6f7804ede46738fbf84c01a7ce922259fc8c3d0e5466 from idIndex" id=66ba2669-8ad0-4eac-871a-1c21d7eabde8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:58:07 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:07.393037459Z" level=info msg="createCtr: removing container 9a0d8a4f22c652c1a96c6f7804ede46738fbf84c01a7ce922259fc8c3d0e5466" id=66ba2669-8ad0-4eac-871a-1c21d7eabde8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:58:07 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:07.393076459Z" level=info msg="createCtr: deleting container 9a0d8a4f22c652c1a96c6f7804ede46738fbf84c01a7ce922259fc8c3d0e5466 from storage" id=66ba2669-8ad0-4eac-871a-1c21d7eabde8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:58:07 force-systemd-env-297062 crio[840]: time="2025-10-02T07:58:07.396076954Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-env-297062_kube-system_93ce0fdca166dc809f7840511688f031_0" id=66ba2669-8ad0-4eac-871a-1c21d7eabde8 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:58:09.998200    2492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:09.998895    2492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:10.003187    2492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:10.004213    2492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:58:10.004900    2492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +3.056037] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:16] overlayfs: idmapped layers are currently not supported
	[  +2.690454] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:30] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:31] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:33] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:58:10 up  2:40,  0 user,  load average: 0.57, 0.82, 1.47
	Linux force-systemd-env-297062 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 07:58:01 force-systemd-env-297062 kubelet[1803]:         container kube-scheduler start failed in pod kube-scheduler-force-systemd-env-297062_kube-system(c1ee4c881e1e09bc75c6dfd41e93a020): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:58:01 force-systemd-env-297062 kubelet[1803]:  > logger="UnhandledError"
	Oct 02 07:58:01 force-systemd-env-297062 kubelet[1803]: E1002 07:58:01.394725    1803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-force-systemd-env-297062" podUID="c1ee4c881e1e09bc75c6dfd41e93a020"
	Oct 02 07:58:03 force-systemd-env-297062 kubelet[1803]: E1002 07:58:03.364939    1803 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-297062\" not found" node="force-systemd-env-297062"
	Oct 02 07:58:03 force-systemd-env-297062 kubelet[1803]: E1002 07:58:03.389956    1803 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:58:03 force-systemd-env-297062 kubelet[1803]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:58:03 force-systemd-env-297062 kubelet[1803]:  > podSandboxID="6bc7c11d3f5398f380d2c66bed861db33b8cfbe2c4f7745bc26da88228376f83"
	Oct 02 07:58:03 force-systemd-env-297062 kubelet[1803]: E1002 07:58:03.390111    1803 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:58:03 force-systemd-env-297062 kubelet[1803]:         container kube-apiserver start failed in pod kube-apiserver-force-systemd-env-297062_kube-system(689cdb6697813f82ad577e618d54aa93): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:58:03 force-systemd-env-297062 kubelet[1803]:  > logger="UnhandledError"
	Oct 02 07:58:03 force-systemd-env-297062 kubelet[1803]: E1002 07:58:03.390168    1803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-force-systemd-env-297062" podUID="689cdb6697813f82ad577e618d54aa93"
	Oct 02 07:58:05 force-systemd-env-297062 kubelet[1803]: E1002 07:58:05.021258    1803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-env-297062?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:58:05 force-systemd-env-297062 kubelet[1803]: I1002 07:58:05.198630    1803 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-env-297062"
	Oct 02 07:58:05 force-systemd-env-297062 kubelet[1803]: E1002 07:58:05.199074    1803 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="force-systemd-env-297062"
	Oct 02 07:58:05 force-systemd-env-297062 kubelet[1803]: E1002 07:58:05.533732    1803 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.76.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 02 07:58:05 force-systemd-env-297062 kubelet[1803]: E1002 07:58:05.731646    1803 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-env-297062.186a9d61f024f5e6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-env-297062,UID:force-systemd-env-297062,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-env-297062 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-env-297062,},FirstTimestamp:2025-10-02 07:54:08.40794263 +0000 UTC m=+1.013442080,LastTimestamp:2025-10-02 07:54:08.40794263 +0000 UTC m=+1.013442080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:force-systemd-env-297062,}"
	Oct 02 07:58:07 force-systemd-env-297062 kubelet[1803]: E1002 07:58:07.365422    1803 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-297062\" not found" node="force-systemd-env-297062"
	Oct 02 07:58:07 force-systemd-env-297062 kubelet[1803]: E1002 07:58:07.396410    1803 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 07:58:07 force-systemd-env-297062 kubelet[1803]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:58:07 force-systemd-env-297062 kubelet[1803]:  > podSandboxID="380576b2e1e06e88ed07f850d8bcc384a78910284914d9a96d0cb71ef39a4000"
	Oct 02 07:58:07 force-systemd-env-297062 kubelet[1803]: E1002 07:58:07.396501    1803 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 07:58:07 force-systemd-env-297062 kubelet[1803]:         container etcd start failed in pod etcd-force-systemd-env-297062_kube-system(93ce0fdca166dc809f7840511688f031): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 07:58:07 force-systemd-env-297062 kubelet[1803]:  > logger="UnhandledError"
	Oct 02 07:58:07 force-systemd-env-297062 kubelet[1803]: E1002 07:58:07.396530    1803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-env-297062" podUID="93ce0fdca166dc809f7840511688f031"
	Oct 02 07:58:08 force-systemd-env-297062 kubelet[1803]: E1002 07:58:08.440584    1803 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-env-297062\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-297062 -n force-systemd-env-297062
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-297062 -n force-systemd-env-297062: exit status 6 (342.598878ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 07:58:10.473541  477518 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-297062" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-env-297062" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-297062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-297062
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-297062: (1.951730772s)
--- FAIL: TestForceSystemdEnv (512.96s)
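
The CRI-O and kubelet excerpts above all fail with the same CreateContainerError, "cannot open sd-bus: No such file or directory", for kube-apiserver, kube-scheduler and etcd, which is why kubeadm's control-plane checks never become healthy. That error normally means the runtime is asking systemd (over D-Bus) to create the container's cgroup while no system bus is reachable inside the node. A minimal diagnostic sketch for a reproduction, run before the profile is cleaned up; it assumes the standard CRI-O config location and D-Bus socket path inside the kicbase node, which may differ:

	minikube -p force-systemd-env-297062 ssh -- 'grep -Rni cgroup_manager /etc/crio/'
	minikube -p force-systemd-env-297062 ssh -- 'ls -l /run/dbus/system_bus_socket || echo "no D-Bus system bus socket"'
	minikube -p force-systemd-env-297062 ssh -- 'systemctl is-active dbus crio kubelet'

If cgroup_manager is set to "systemd" but the bus socket is absent, container creation keeps failing exactly as logged above.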

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-615837 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-615837 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-579rb" [cbac82ce-18b2-4a6f-b8b2-c8d0d3390b0e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1002 06:54:28.907755  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:54:56.612246  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:59:28.908509  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-615837 -n functional-615837
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-02 07:02:19.344439933 +0000 UTC m=+1259.690719499
functional_test.go:1645: (dbg) Run:  kubectl --context functional-615837 describe po hello-node-connect-7d85dfc575-579rb -n default
functional_test.go:1645: (dbg) kubectl --context functional-615837 describe po hello-node-connect-7d85dfc575-579rb -n default:
Name:             hello-node-connect-7d85dfc575-579rb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-615837/192.168.49.2
Start Time:       Thu, 02 Oct 2025 06:52:18 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gr9p7 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-gr9p7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-579rb to functional-615837
Normal   Pulling    6m57s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m57s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m57s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m52s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m39s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-615837 logs hello-node-connect-7d85dfc575-579rb -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-615837 logs hello-node-connect-7d85dfc575-579rb -n default: exit status 1 (88.729944ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-579rb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-615837 logs hello-node-connect-7d85dfc575-579rb -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-615837 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-579rb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-615837/192.168.49.2
Start Time:       Thu, 02 Oct 2025 06:52:18 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gr9p7 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-gr9p7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-579rb to functional-615837
Normal   Pulling    6m57s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m57s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m57s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m52s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m39s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
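
The repeated pull failure above ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list") is the root cause of the Pending pod: CRI-O's short-name resolution is set to enforcing and the unqualified reference matches more than one search registry in registries.conf, so the runtime refuses to pick one. A minimal sketch of the same deployment with a fully-qualified reference; docker.io is assumed here for illustration and may not be the registry this suite intends:

	kubectl --context functional-615837 create deployment hello-node-connect --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-615837 expose deployment hello-node-connect --type=NodePort --port=8080

With a fully-qualified name the short-name policy no longer applies, so the pull either succeeds or fails with a concrete registry error instead of an ambiguity error.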

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-615837 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-615837 logs -l app=hello-node-connect: exit status 1 (110.856399ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-579rb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-615837 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-615837 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.143.175
IPs:                      10.104.143.175
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31739/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
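
Because the pod never pulled its image, the Service above has an empty Endpoints field, so the NodePort (31739) has nothing to route to regardless of networking. A quick confirmation sketch, assuming the cluster is still reachable under the same context:

	kubectl --context functional-615837 get endpoints hello-node-connect
	kubectl --context functional-615837 get pods -l app=hello-node-connect -o wide

An empty ENDPOINTS column together with a non-Ready pod confirms the failure is at image pull, not at the service or kube-proxy layer.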
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-615837
helpers_test.go:243: (dbg) docker inspect functional-615837:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9ab4e675992f9bcd6a8e0920851f0f5d5022bfcfec22e500eb4570e890ae1cdd",
	        "Created": "2025-10-02T06:48:48.331264848Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309955,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:48:48.41177139Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/9ab4e675992f9bcd6a8e0920851f0f5d5022bfcfec22e500eb4570e890ae1cdd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ab4e675992f9bcd6a8e0920851f0f5d5022bfcfec22e500eb4570e890ae1cdd/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ab4e675992f9bcd6a8e0920851f0f5d5022bfcfec22e500eb4570e890ae1cdd/hosts",
	        "LogPath": "/var/lib/docker/containers/9ab4e675992f9bcd6a8e0920851f0f5d5022bfcfec22e500eb4570e890ae1cdd/9ab4e675992f9bcd6a8e0920851f0f5d5022bfcfec22e500eb4570e890ae1cdd-json.log",
	        "Name": "/functional-615837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-615837:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-615837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9ab4e675992f9bcd6a8e0920851f0f5d5022bfcfec22e500eb4570e890ae1cdd",
	                "LowerDir": "/var/lib/docker/overlay2/4b4ccf823121a2ed1c117103955a039217d997913c75520983ac87f5558b7f3c-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4b4ccf823121a2ed1c117103955a039217d997913c75520983ac87f5558b7f3c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4b4ccf823121a2ed1c117103955a039217d997913c75520983ac87f5558b7f3c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4b4ccf823121a2ed1c117103955a039217d997913c75520983ac87f5558b7f3c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-615837",
	                "Source": "/var/lib/docker/volumes/functional-615837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-615837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-615837",
	                "name.minikube.sigs.k8s.io": "functional-615837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ebbcdee37ba3d96ec93c8c8650f79f38e712f04965ad3b79e6fa3ffd0565bda",
	            "SandboxKey": "/var/run/docker/netns/1ebbcdee37ba",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-615837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:14:60:97:e5:e9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e077fa3f403b8b37eb4944d1054738ca3b28182fa39bf2962a55065d46e2eb59",
	                    "EndpointID": "647c80412107bd68fd7f7f95784df6f23fd32adad48d4ccbf4ba029acab90603",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-615837",
	                        "9ab4e675992f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-615837 -n functional-615837
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-615837 logs -n 25: (1.53945214s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-615837 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ ssh            │ functional-615837 ssh -- ls -la /mount-9p                                                                          │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ ssh            │ functional-615837 ssh sudo umount -f /mount-9p                                                                     │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │                     │
	│ ssh            │ functional-615837 ssh findmnt -T /mount1                                                                           │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │                     │
	│ mount          │ -p functional-615837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2398992057/001:/mount2 --alsologtostderr -v=1 │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │                     │
	│ mount          │ -p functional-615837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2398992057/001:/mount1 --alsologtostderr -v=1 │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │                     │
	│ mount          │ -p functional-615837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2398992057/001:/mount3 --alsologtostderr -v=1 │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │                     │
	│ ssh            │ functional-615837 ssh findmnt -T /mount1                                                                           │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ ssh            │ functional-615837 ssh findmnt -T /mount2                                                                           │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ ssh            │ functional-615837 ssh findmnt -T /mount3                                                                           │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ mount          │ -p functional-615837 --kill=true                                                                                   │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ start          │ -p functional-615837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ start          │ -p functional-615837 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ start          │ -p functional-615837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-615837 --alsologtostderr -v=1                                                     │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ update-context │ functional-615837 update-context --alsologtostderr -v=2                                                            │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ update-context │ functional-615837 update-context --alsologtostderr -v=2                                                            │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ update-context │ functional-615837 update-context --alsologtostderr -v=2                                                            │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ image          │ functional-615837 image ls --format short --alsologtostderr                                                        │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ ssh            │ functional-615837 ssh pgrep buildkitd                                                                              │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │                     │
	│ image          │ functional-615837 image build -t localhost/my-image:functional-615837 testdata/build --alsologtostderr             │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ image          │ functional-615837 image ls                                                                                         │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ image          │ functional-615837 image ls --format yaml --alsologtostderr                                                         │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ image          │ functional-615837 image ls --format json --alsologtostderr                                                         │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ image          │ functional-615837 image ls --format table --alsologtostderr                                                        │ functional-615837 │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
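The Audit table above is minikube's own command history for the profile, truncated here by the `logs -n 25` used for the post-mortem capture. Re-running the same command with a larger -n widens the window when more history is needed (this simply mirrors the capture command already shown above):

    out/minikube-linux-arm64 -p functional-615837 logs -n 50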
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:02:02
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:02:02.087228  321911 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:02:02.087411  321911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:02:02.087434  321911 out.go:374] Setting ErrFile to fd 2...
	I1002 07:02:02.087454  321911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:02:02.088497  321911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:02:02.088963  321911 out.go:368] Setting JSON to false
	I1002 07:02:02.089952  321911 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6273,"bootTime":1759382249,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:02:02.090065  321911 start.go:140] virtualization:  
	I1002 07:02:02.093472  321911 out.go:179] * [functional-615837] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:02:02.096729  321911 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:02:02.096774  321911 notify.go:220] Checking for updates...
	I1002 07:02:02.102843  321911 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:02:02.105883  321911 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:02:02.108774  321911 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:02:02.111627  321911 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:02:02.115180  321911 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:02:02.118496  321911 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:02:02.119047  321911 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:02:02.160089  321911 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:02:02.160250  321911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:02:02.225359  321911 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:02:02.214995625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:02:02.225477  321911 docker.go:318] overlay module found
	I1002 07:02:02.228526  321911 out.go:179] * Using the docker driver based on the existing profile
	I1002 07:02:02.231295  321911 start.go:304] selected driver: docker
	I1002 07:02:02.231320  321911 start.go:924] validating driver "docker" against &{Name:functional-615837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-615837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:02:02.231428  321911 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:02:02.235066  321911 out.go:203] 
	W1002 07:02:02.238004  321911 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum memory of 1800MB
	I1002 07:02:02.240818  321911 out.go:203] 
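The warning above is the expected outcome of the dry-run case: the test deliberately requests 250MB, which is below minikube's 1800MB usable minimum, so start exits with RSRC_INSUFFICIENT_REQ_MEMORY. For comparison, a dry-run that would pass the memory validation might look like the following (illustrative value only, mirroring the flags from the Audit table above):

    out/minikube-linux-arm64 start -p functional-615837 --dry-run --memory 2200MB --alsologtostderr --driver=docker --container-runtime=crio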
	
	
	==> CRI-O <==
	Oct 02 07:02:08 functional-615837 crio[3548]: time="2025-10-02T07:02:08.03504697Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf" id=02de44e8-3468-47e5-b70b-d9dfe53cad2d name=/runtime.v1.ImageService/PullImage
	Oct 02 07:02:08 functional-615837 crio[3548]: time="2025-10-02T07:02:08.03561513Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3b5bd028-d81d-4d73-a112-30c69d6d1a7c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:02:08 functional-615837 crio[3548]: time="2025-10-02T07:02:08.038903786Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=3e889ce0-0a2d-481a-bf0a-ed8a9e53c938 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:02:08 functional-615837 crio[3548]: time="2025-10-02T07:02:08.040393068Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=f6488553-cec2-40a7-96d6-dce1cd04359e name=/runtime.v1.ImageService/PullImage
	Oct 02 07:02:08 functional-615837 crio[3548]: time="2025-10-02T07:02:08.043728428Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 02 07:02:08 functional-615837 crio[3548]: time="2025-10-02T07:02:08.046861693Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-prkvg/kubernetes-dashboard" id=13bd3840-1cb3-439f-9c37-4a5d02714adf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:08 functional-615837 crio[3548]: time="2025-10-02T07:02:08.047775472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:02:08 functional-615837 crio[3548]: time="2025-10-02T07:02:08.053231167Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:02:08 functional-615837 crio[3548]: time="2025-10-02T07:02:08.053724726Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/93cb4898c96f9639448ed796b2ee3ac9e11d7a40935f77c29ebfe597b5e4b554/merged/etc/group: no such file or directory"
	Oct 02 07:02:08 functional-615837 crio[3548]: time="2025-10-02T07:02:08.054311848Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:02:08 functional-615837 crio[3548]: time="2025-10-02T07:02:08.071365116Z" level=info msg="Created container accba3cf68c53105c148606b77597242c27f8cfb0c1d1b4c696fba72a484aa1f: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-prkvg/kubernetes-dashboard" id=13bd3840-1cb3-439f-9c37-4a5d02714adf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:08 functional-615837 crio[3548]: time="2025-10-02T07:02:08.074528076Z" level=info msg="Starting container: accba3cf68c53105c148606b77597242c27f8cfb0c1d1b4c696fba72a484aa1f" id=b12a78b0-af3e-4eb8-867c-480c01afd486 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:02:08 functional-615837 crio[3548]: time="2025-10-02T07:02:08.078274598Z" level=info msg="Started container" PID=7047 containerID=accba3cf68c53105c148606b77597242c27f8cfb0c1d1b4c696fba72a484aa1f description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-prkvg/kubernetes-dashboard id=b12a78b0-af3e-4eb8-867c-480c01afd486 name=/runtime.v1.RuntimeService/StartContainer sandboxID=136a79e20e2ba6b4345be07e2776575925a4d453ca7221e7aa6c239c35b768ed
	Oct 02 07:02:08 functional-615837 crio[3548]: time="2025-10-02T07:02:08.287108118Z" level=info msg="Image operating system mismatch: image uses OS \"linux\"+architecture \"amd64\"+\"\", expecting one of \"linux+arm64+\\\"v8\\\", linux+arm64+\\\"\\\"\""
	Oct 02 07:02:09 functional-615837 crio[3548]: time="2025-10-02T07:02:09.388104353Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a" id=f6488553-cec2-40a7-96d6-dce1cd04359e name=/runtime.v1.ImageService/PullImage
	Oct 02 07:02:09 functional-615837 crio[3548]: time="2025-10-02T07:02:09.389169189Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=7b6d8c2d-add9-45fb-9a35-9238b9cf64e4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:02:09 functional-615837 crio[3548]: time="2025-10-02T07:02:09.392650584Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=b76aedda-b5d4-4e8f-87bc-e8f6ab42cf93 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:02:09 functional-615837 crio[3548]: time="2025-10-02T07:02:09.399589786Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hgxpm/dashboard-metrics-scraper" id=65f69de0-2370-4a27-8046-01f1488d7f58 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:09 functional-615837 crio[3548]: time="2025-10-02T07:02:09.400403765Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:02:09 functional-615837 crio[3548]: time="2025-10-02T07:02:09.405685329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:02:09 functional-615837 crio[3548]: time="2025-10-02T07:02:09.405938763Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a2a64ccc63c1ab36fa243a246295c1c012dcb229163d1ce06339995cb87f1768/merged/etc/group: no such file or directory"
	Oct 02 07:02:09 functional-615837 crio[3548]: time="2025-10-02T07:02:09.406294729Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:02:09 functional-615837 crio[3548]: time="2025-10-02T07:02:09.421403261Z" level=info msg="Created container a8322b8131ced2ef946b0ab5538bcfd9bb116cb28d3a74f6b6c7d635c00ca118: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hgxpm/dashboard-metrics-scraper" id=65f69de0-2370-4a27-8046-01f1488d7f58 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:02:09 functional-615837 crio[3548]: time="2025-10-02T07:02:09.424012518Z" level=info msg="Starting container: a8322b8131ced2ef946b0ab5538bcfd9bb116cb28d3a74f6b6c7d635c00ca118" id=7ff2a20f-2f82-433f-b736-8e7b0aa56e4e name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:02:09 functional-615837 crio[3548]: time="2025-10-02T07:02:09.427882799Z" level=info msg="Started container" PID=7089 containerID=a8322b8131ced2ef946b0ab5538bcfd9bb116cb28d3a74f6b6c7d635c00ca118 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hgxpm/dashboard-metrics-scraper id=7ff2a20f-2f82-433f-b736-8e7b0aa56e4e name=/runtime.v1.RuntimeService/StartContainer sandboxID=a565a0e714ad5dcdf575219f729677b128bbeeb96862a19d7615bbd99eea7354
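The "Image operating system mismatch" entry shows CRI-O skipping a linux/amd64 variant while resolving the dashboard images on this arm64 node, before settling on a matching digest. To confirm which platforms a tag actually publishes, the manifest list can be inspected from the host (a hedged sketch; docker manifest inspect may require a recent docker CLI, and skopeo would serve equally well):

    docker manifest inspect docker.io/kubernetesui/metrics-scraper:v1.0.8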
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a8322b8131ced       docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a   11 seconds ago      Running             dashboard-metrics-scraper   0                   a565a0e714ad5       dashboard-metrics-scraper-77bf4d6c4c-hgxpm   kubernetes-dashboard
	accba3cf68c53       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf         12 seconds ago      Running             kubernetes-dashboard        0                   136a79e20e2ba       kubernetes-dashboard-855c9754f9-prkvg        kubernetes-dashboard
	f5c98f826c5b8       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e              27 seconds ago      Exited              mount-munger                0                   7b0ce95e2f52f       busybox-mount                                default
	270daedbbe6fd       docker.io/library/nginx@sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992                  10 minutes ago      Running             myfrontend                  0                   4774dd79093c4       sp-pod                                       default
	857a13bf1f88a       docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac                  10 minutes ago      Running             nginx                       0                   dcac121bc1d9b       nginx-svc                                    default
	6e8eb5461fe17       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Running             storage-provisioner         3                   ab32670780587       storage-provisioner                          kube-system
	df0f34027812f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Running             kindnet-cni                 3                   9a91dada4338d       kindnet-6r9nz                                kube-system
	2551da9c23ea8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Running             kube-proxy                  3                   68b59f86e172e       kube-proxy-xzwzg                             kube-system
	3e366dfdad2e0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                 11 minutes ago      Running             kube-apiserver              0                   36f65e92d3c31       kube-apiserver-functional-615837             kube-system
	7ae79e29ea2de       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Running             kube-controller-manager     3                   f818c467d6d3d       kube-controller-manager-functional-615837    kube-system
	11223a8a62be8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Running             kube-scheduler              3                   7ca3ad7398321       kube-scheduler-functional-615837             kube-system
	72c0fc9df3bef       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Running             etcd                        3                   817172b86b844       etcd-functional-615837                       kube-system
	2f416597e2adb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Running             coredns                     2                   6ee20265628b6       coredns-66bc5c9577-p6cg4                     kube-system
	573fd72a612c7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Exited              etcd                        2                   817172b86b844       etcd-functional-615837                       kube-system
	3fec1b270d9cd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Exited              kube-scheduler              2                   7ca3ad7398321       kube-scheduler-functional-615837             kube-system
	a766504830fbb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Exited              kube-controller-manager     2                   f818c467d6d3d       kube-controller-manager-functional-615837    kube-system
	b6e29d695f30c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Exited              storage-provisioner         2                   ab32670780587       storage-provisioner                          kube-system
	ce64082c34151       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Exited              kube-proxy                  2                   68b59f86e172e       kube-proxy-xzwzg                             kube-system
	d99e4dcb6f125       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Exited              kindnet-cni                 2                   9a91dada4338d       kindnet-6r9nz                                kube-system
	cbeaaf485b99e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 12 minutes ago      Exited              coredns                     1                   6ee20265628b6       coredns-66bc5c9577-p6cg4                     kube-system
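The container status table is the CRI view from inside the node; it can be reproduced directly over SSH if the post-mortem needs refreshing (a minimal sketch, assuming the profile is still running):

    out/minikube-linux-arm64 -p functional-615837 ssh -- sudo crictl ps -a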
	
	
	==> coredns [2f416597e2adbb4dd110b5666960ac58a30766b3369fda26c2bfbb63d938b0b8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43443 - 27614 "HINFO IN 1950895920748800981.7039194540545720010. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021370408s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
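The connection-refused errors against 10.96.0.1:443 coincide with the apiserver restarts earlier in the functional suite; once the API came back the plugin synced and answered the HINFO probe. A quick in-cluster check that service DNS is healthy again could look like this (a hedged sketch; the kubectl context name is assumed to match the profile, and the busybox image is the one already used by these tests):

    kubectl --context functional-615837 run dns-probe --rm -i --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default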
	
	
	==> coredns [cbeaaf485b99e76a89bda3f35eabc4d6c7836c07751df85b54349be0c7722f6e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34397 - 30757 "HINFO IN 4985108354668878122.3443187719627583076. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013865601s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-615837
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-615837
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=functional-615837
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_49_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:49:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-615837
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:02:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:02:19 +0000   Thu, 02 Oct 2025 06:49:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:02:19 +0000   Thu, 02 Oct 2025 06:49:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:02:19 +0000   Thu, 02 Oct 2025 06:49:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:02:19 +0000   Thu, 02 Oct 2025 06:50:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-615837
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 585aeca25ef948299215589877d73abc
	  System UUID:                5eb2a4c6-aa91-4ca6-b85c-5f521a655bb7
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-wkwvf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-579rb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-p6cg4                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-functional-615837                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-6r9nz                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-functional-615837              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-615837     200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-xzwzg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-615837              100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-hgxpm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-prkvg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node functional-615837 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node functional-615837 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                kubelet          Node functional-615837 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           13m                node-controller  Node functional-615837 event: Registered Node functional-615837 in Controller
	  Normal   NodeReady                12m                kubelet          Node functional-615837 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node functional-615837 event: Registered Node functional-615837 in Controller
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-615837 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-615837 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-615837 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node functional-615837 event: Registered Node functional-615837 in Controller
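The node report shows a single healthy node: Ready since 06:50, 850m of 2 CPUs requested (42%), and both dashboard pods scheduled 18s before capture. The same view can be regenerated at any point during triage (again assuming the context name matches the profile):

    kubectl --context functional-615837 describe node functional-615837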
	
	
	==> dmesg <==
	[Oct 2 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014797] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531434] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.039899] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.787301] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.571073] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 2 05:52] hrtimer: interrupt took 24222969 ns
	[Oct 2 06:40] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:42] overlayfs: idmapped layers are currently not supported
	[  +0.072713] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 06:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 06:49] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [573fd72a612c704d0fe0d55a261e807d7534501fa734e39e54db9f70055da84b] <==
	{"level":"warn","ts":"2025-10-02T06:50:58.693619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:50:58.714588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:50:58.751430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:50:58.787269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:50:58.817299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:50:58.850193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:50:58.955666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41626","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T06:51:06.771958Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T06:51:06.772005Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-615837","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T06:51:06.772141Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T06:51:06.773829Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T06:51:06.773909Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T06:51:06.773930Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T06:51:06.773992Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-02T06:51:06.774002Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T06:51:06.774026Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T06:51:06.774066Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T06:51:06.774074Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T06:51:06.774112Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T06:51:06.774121Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T06:51:06.774127Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T06:51:06.778105Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T06:51:06.778193Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T06:51:06.778228Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T06:51:06.778235Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-615837","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [72c0fc9df3bef252963fc8e3df9a332f9158be439b92e2d72073b6f9e6da5843] <==
	{"level":"warn","ts":"2025-10-02T06:51:14.512327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.523668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.545996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.576641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.592622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.603667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.634334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.654955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.694508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.718392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.732427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.753462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.763721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.782102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.803972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.816414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.831931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.855229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.894422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.911570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.933755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:51:14.995176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37638","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T07:01:13.740490Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1101}
	{"level":"info","ts":"2025-10-02T07:01:13.764387Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1101,"took":"23.597932ms","hash":3716964096,"current-db-size-bytes":3223552,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1359872,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-02T07:01:13.764538Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3716964096,"revision":1101,"compact-revision":-1}
	
	
	==> kernel <==
	 07:02:21 up  1:44,  0 user,  load average: 0.79, 0.64, 1.54
	Linux functional-615837 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d99e4dcb6f125d3020e8dc2d5d030936af0e9145a252aff1e5284714fa5f1c66] <==
	I1002 06:50:55.810142       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 06:50:55.810349       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 06:50:55.810480       1 main.go:148] setting mtu 1500 for CNI 
	I1002 06:50:55.810491       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 06:50:55.810504       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T06:50:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1002 06:50:56.135313       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 06:50:56.135510       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 06:50:56.135594       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 06:50:56.135636       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 06:50:56.139513       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 06:50:56.141151       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 06:50:56.143810       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 06:50:56.146754       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1002 06:51:00.237752       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 06:51:00.237848       1 metrics.go:72] Registering metrics
	I1002 06:51:00.237948       1 controller.go:711] "Syncing nftables rules"
	I1002 06:51:06.131454       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:51:06.131493       1 main.go:301] handling current node
	
	
	==> kindnet [df0f34027812f280b109fab7b8b65797826eaf0f400eedf0a04f864d9dce2a58] <==
	I1002 07:00:18.235892       1 main.go:301] handling current node
	I1002 07:00:28.230763       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:00:28.230802       1 main.go:301] handling current node
	I1002 07:00:38.235866       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:00:38.235903       1 main.go:301] handling current node
	I1002 07:00:48.231251       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:00:48.231305       1 main.go:301] handling current node
	I1002 07:00:58.230011       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:00:58.230116       1 main.go:301] handling current node
	I1002 07:01:08.230879       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:01:08.230919       1 main.go:301] handling current node
	I1002 07:01:18.233895       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:01:18.234009       1 main.go:301] handling current node
	I1002 07:01:28.231610       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:01:28.231909       1 main.go:301] handling current node
	I1002 07:01:38.230675       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:01:38.230711       1 main.go:301] handling current node
	I1002 07:01:48.232083       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:01:48.232118       1 main.go:301] handling current node
	I1002 07:01:58.230788       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:01:58.230840       1 main.go:301] handling current node
	I1002 07:02:08.231239       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:02:08.231350       1 main.go:301] handling current node
	I1002 07:02:18.231220       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:02:18.231341       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3e366dfdad2e087024ff5b108bf88b66e207a515a6651f0d207f26e63f3e9c52] <==
	I1002 06:51:16.132825       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 06:51:16.132892       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 06:51:16.132932       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 06:51:16.137704       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 06:51:16.156836       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 06:51:16.188367       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 06:51:16.717302       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1002 06:51:17.053682       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 06:51:17.055107       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 06:51:17.066546       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 06:51:17.497198       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 06:51:17.628724       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 06:51:17.698119       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 06:51:17.706642       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 06:51:24.779954       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 06:51:32.474874       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.148.39"}
	I1002 06:51:41.255833       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.226.210"}
	I1002 06:51:44.700902       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.191.198"}
	E1002 06:52:10.532472       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53752: use of closed network connection
	E1002 06:52:18.644980       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53776: use of closed network connection
	I1002 06:52:18.995917       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.143.175"}
	I1002 07:01:16.032443       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:02:03.272264       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 07:02:03.532096       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.116.152"}
	I1002 07:02:03.569449       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.31.200"}
	
	
	==> kube-controller-manager [7ae79e29ea2deb97eb48b51cde2e733c94f6d18aa76211897887fc090409059a] <==
	I1002 06:51:19.419160       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:51:19.419246       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 06:51:19.421453       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 06:51:19.425702       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 06:51:19.428905       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 06:51:19.440287       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 06:51:19.451783       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 06:51:19.454115       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 06:51:19.454212       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 06:51:19.462009       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 06:51:19.462124       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 06:51:19.462144       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 06:51:19.464399       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 06:51:19.464532       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 06:51:19.464654       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-615837"
	I1002 06:51:19.465116       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	E1002 07:02:03.379489       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:02:03.402274       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:02:03.415949       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:02:03.422132       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:02:03.426751       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:02:03.429596       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:02:03.437248       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:02:03.446078       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:02:03.446198       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [a766504830fbb41273dddda0bc3e5f5f1ea1a34086cde02c82a5e2541be0618e] <==
	I1002 06:50:57.504727       1 serving.go:386] Generated self-signed cert in-memory
	I1002 06:50:59.359698       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1002 06:50:59.360032       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:50:59.361554       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 06:50:59.361722       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 06:50:59.365464       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1002 06:50:59.365539       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [2551da9c23ea86fc693c779378a97e59142c8b0de42fa695be7ca813f39d85da] <==
	I1002 06:51:18.378372       1 server_linux.go:53] "Using iptables proxy"
	I1002 06:51:18.541297       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:51:18.641511       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:51:18.641559       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 06:51:18.641648       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:51:18.690321       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 06:51:18.690379       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:51:18.695170       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:51:18.695452       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:51:18.695472       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:51:18.696507       1 config.go:200] "Starting service config controller"
	I1002 06:51:18.696531       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:51:18.696831       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:51:18.696847       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:51:18.696864       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:51:18.696868       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:51:18.697514       1 config.go:309] "Starting node config controller"
	I1002 06:51:18.697532       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:51:18.697545       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:51:18.797147       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 06:51:18.797187       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 06:51:18.797240       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [ce64082c34151c0bbe58d7a2b8a34bf7bfd67b3d1b6e181f94e3259631e7e388] <==
	E1002 06:51:00.235166       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:51:00.359962       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 06:51:00.360159       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:51:00.366397       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:51:00.366802       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:51:00.367037       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:51:00.368461       1 config.go:200] "Starting service config controller"
	I1002 06:51:00.368571       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:51:00.368620       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:51:00.368648       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:51:00.368684       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:51:00.368713       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	E1002 06:51:00.386238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8441/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	I1002 06:51:00.386906       1 config.go:309] "Starting node config controller"
	I1002 06:51:00.386992       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:51:00.387027       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E1002 06:51:00.388147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1002 06:51:00.388344       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 06:51:00.388598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:51:01.193278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:51:01.237561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8441/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1002 06:51:01.933407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1002 06:51:03.423338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8441/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1002 06:51:03.498407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:51:03.550954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	
	
	==> kube-scheduler [11223a8a62be87038c364dfa06d1d7188f69734383fa0b497c19a4081f77088d] <==
	I1002 06:51:14.734571       1 serving.go:386] Generated self-signed cert in-memory
	I1002 06:51:16.653827       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 06:51:16.653859       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:51:16.660045       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 06:51:16.661769       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 06:51:16.661924       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 06:51:16.661996       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 06:51:16.664692       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:51:16.666286       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:51:16.667276       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 06:51:16.667390       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 06:51:16.763004       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 06:51:16.767652       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 06:51:16.767779       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [3fec1b270d9cd76eea5a4ab1835d950f13045f8fecf375c080501979923c8600] <==
	E1002 06:51:04.399267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:51:04.713714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:51:04.744637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:51:04.784682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 06:51:04.846411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 06:51:05.006563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 06:51:05.090734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:51:05.166842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:51:05.189519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:51:05.201073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:51:05.256404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:51:05.332208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 06:51:05.361142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 06:51:05.377942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 06:51:05.531216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 06:51:05.550812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:51:05.623006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:51:06.889671       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1002 06:51:06.890079       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 06:51:06.890103       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 06:51:06.890123       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1002 06:51:06.890153       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:51:06.890165       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:51:06.890228       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 06:51:06.890250       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 07:01:28 functional-615837 kubelet[4127]: E1002 07:01:28.049005    4127 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wkwvf" podUID="51ee02da-9ddd-4188-b96d-9b3418215528"
	Oct 02 07:01:35 functional-615837 kubelet[4127]: E1002 07:01:35.049281    4127 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-579rb" podUID="cbac82ce-18b2-4a6f-b8b2-c8d0d3390b0e"
	Oct 02 07:01:40 functional-615837 kubelet[4127]: E1002 07:01:40.048181    4127 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wkwvf" podUID="51ee02da-9ddd-4188-b96d-9b3418215528"
	Oct 02 07:01:50 functional-615837 kubelet[4127]: E1002 07:01:50.048189    4127 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-579rb" podUID="cbac82ce-18b2-4a6f-b8b2-c8d0d3390b0e"
	Oct 02 07:01:51 functional-615837 kubelet[4127]: I1002 07:01:51.523562    4127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/60590501-6c50-42e9-991d-522e7aa90de2-test-volume\") pod \"busybox-mount\" (UID: \"60590501-6c50-42e9-991d-522e7aa90de2\") " pod="default/busybox-mount"
	Oct 02 07:01:51 functional-615837 kubelet[4127]: I1002 07:01:51.523614    4127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t2h5\" (UniqueName: \"kubernetes.io/projected/60590501-6c50-42e9-991d-522e7aa90de2-kube-api-access-2t2h5\") pod \"busybox-mount\" (UID: \"60590501-6c50-42e9-991d-522e7aa90de2\") " pod="default/busybox-mount"
	Oct 02 07:01:51 functional-615837 kubelet[4127]: W1002 07:01:51.836615    4127 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9ab4e675992f9bcd6a8e0920851f0f5d5022bfcfec22e500eb4570e890ae1cdd/crio-7b0ce95e2f52fb597be8900387716e729d7153bf1697605922810b03cc592a09 WatchSource:0}: Error finding container 7b0ce95e2f52fb597be8900387716e729d7153bf1697605922810b03cc592a09: Status 404 returned error can't find the container with id 7b0ce95e2f52fb597be8900387716e729d7153bf1697605922810b03cc592a09
	Oct 02 07:01:55 functional-615837 kubelet[4127]: E1002 07:01:55.048179    4127 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wkwvf" podUID="51ee02da-9ddd-4188-b96d-9b3418215528"
	Oct 02 07:01:55 functional-615837 kubelet[4127]: I1002 07:01:55.954336    4127 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/60590501-6c50-42e9-991d-522e7aa90de2-test-volume\") pod \"60590501-6c50-42e9-991d-522e7aa90de2\" (UID: \"60590501-6c50-42e9-991d-522e7aa90de2\") "
	Oct 02 07:01:55 functional-615837 kubelet[4127]: I1002 07:01:55.954394    4127 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t2h5\" (UniqueName: \"kubernetes.io/projected/60590501-6c50-42e9-991d-522e7aa90de2-kube-api-access-2t2h5\") pod \"60590501-6c50-42e9-991d-522e7aa90de2\" (UID: \"60590501-6c50-42e9-991d-522e7aa90de2\") "
	Oct 02 07:01:55 functional-615837 kubelet[4127]: I1002 07:01:55.954590    4127 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60590501-6c50-42e9-991d-522e7aa90de2-test-volume" (OuterVolumeSpecName: "test-volume") pod "60590501-6c50-42e9-991d-522e7aa90de2" (UID: "60590501-6c50-42e9-991d-522e7aa90de2"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 02 07:01:55 functional-615837 kubelet[4127]: I1002 07:01:55.956443    4127 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60590501-6c50-42e9-991d-522e7aa90de2-kube-api-access-2t2h5" (OuterVolumeSpecName: "kube-api-access-2t2h5") pod "60590501-6c50-42e9-991d-522e7aa90de2" (UID: "60590501-6c50-42e9-991d-522e7aa90de2"). InnerVolumeSpecName "kube-api-access-2t2h5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 02 07:01:56 functional-615837 kubelet[4127]: I1002 07:01:56.055035    4127 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/60590501-6c50-42e9-991d-522e7aa90de2-test-volume\") on node \"functional-615837\" DevicePath \"\""
	Oct 02 07:01:56 functional-615837 kubelet[4127]: I1002 07:01:56.055107    4127 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2t2h5\" (UniqueName: \"kubernetes.io/projected/60590501-6c50-42e9-991d-522e7aa90de2-kube-api-access-2t2h5\") on node \"functional-615837\" DevicePath \"\""
	Oct 02 07:01:56 functional-615837 kubelet[4127]: I1002 07:01:56.819482    4127 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b0ce95e2f52fb597be8900387716e729d7153bf1697605922810b03cc592a09"
	Oct 02 07:02:01 functional-615837 kubelet[4127]: E1002 07:02:01.048955    4127 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-579rb" podUID="cbac82ce-18b2-4a6f-b8b2-c8d0d3390b0e"
	Oct 02 07:02:03 functional-615837 kubelet[4127]: I1002 07:02:03.640081    4127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/029a42a8-05e2-4167-aad6-dc4f0cdac52f-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-prkvg\" (UID: \"029a42a8-05e2-4167-aad6-dc4f0cdac52f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-prkvg"
	Oct 02 07:02:03 functional-615837 kubelet[4127]: I1002 07:02:03.640148    4127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l88gz\" (UniqueName: \"kubernetes.io/projected/029a42a8-05e2-4167-aad6-dc4f0cdac52f-kube-api-access-l88gz\") pod \"kubernetes-dashboard-855c9754f9-prkvg\" (UID: \"029a42a8-05e2-4167-aad6-dc4f0cdac52f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-prkvg"
	Oct 02 07:02:03 functional-615837 kubelet[4127]: I1002 07:02:03.640173    4127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d728f8fd-43e9-482b-bc12-18bc8876830b-tmp-volume\") pod \"dashboard-metrics-scraper-77bf4d6c4c-hgxpm\" (UID: \"d728f8fd-43e9-482b-bc12-18bc8876830b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hgxpm"
	Oct 02 07:02:03 functional-615837 kubelet[4127]: I1002 07:02:03.640196    4127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c59hk\" (UniqueName: \"kubernetes.io/projected/d728f8fd-43e9-482b-bc12-18bc8876830b-kube-api-access-c59hk\") pod \"dashboard-metrics-scraper-77bf4d6c4c-hgxpm\" (UID: \"d728f8fd-43e9-482b-bc12-18bc8876830b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hgxpm"
	Oct 02 07:02:03 functional-615837 kubelet[4127]: W1002 07:02:03.886334    4127 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9ab4e675992f9bcd6a8e0920851f0f5d5022bfcfec22e500eb4570e890ae1cdd/crio-a565a0e714ad5dcdf575219f729677b128bbeeb96862a19d7615bbd99eea7354 WatchSource:0}: Error finding container a565a0e714ad5dcdf575219f729677b128bbeeb96862a19d7615bbd99eea7354: Status 404 returned error can't find the container with id a565a0e714ad5dcdf575219f729677b128bbeeb96862a19d7615bbd99eea7354
	Oct 02 07:02:08 functional-615837 kubelet[4127]: E1002 07:02:08.048523    4127 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wkwvf" podUID="51ee02da-9ddd-4188-b96d-9b3418215528"
	Oct 02 07:02:09 functional-615837 kubelet[4127]: I1002 07:02:09.890548    4127 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-prkvg" podStartSLOduration=2.697344976 podStartE2EDuration="6.890527415s" podCreationTimestamp="2025-10-02 07:02:03 +0000 UTC" firstStartedPulling="2025-10-02 07:02:03.844755317 +0000 UTC m=+653.029057794" lastFinishedPulling="2025-10-02 07:02:08.037937748 +0000 UTC m=+657.222240233" observedRunningTime="2025-10-02 07:02:08.890807621 +0000 UTC m=+658.075110114" watchObservedRunningTime="2025-10-02 07:02:09.890527415 +0000 UTC m=+659.074829892"
	Oct 02 07:02:14 functional-615837 kubelet[4127]: E1002 07:02:14.048744    4127 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-579rb" podUID="cbac82ce-18b2-4a6f-b8b2-c8d0d3390b0e"
	Oct 02 07:02:19 functional-615837 kubelet[4127]: E1002 07:02:19.048745    4127 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-wkwvf" podUID="51ee02da-9ddd-4188-b96d-9b3418215528"
	
	
	==> kubernetes-dashboard [accba3cf68c53105c148606b77597242c27f8cfb0c1d1b4c696fba72a484aa1f] <==
	2025/10/02 07:02:08 Starting overwatch
	2025/10/02 07:02:08 Using namespace: kubernetes-dashboard
	2025/10/02 07:02:08 Using in-cluster config to connect to apiserver
	2025/10/02 07:02:08 Using secret token for csrf signing
	2025/10/02 07:02:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 07:02:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 07:02:08 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 07:02:08 Generating JWE encryption key
	2025/10/02 07:02:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 07:02:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 07:02:09 Initializing JWE encryption key from synchronized object
	2025/10/02 07:02:09 Creating in-cluster Sidecar client
	2025/10/02 07:02:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 07:02:09 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [6e8eb5461fe17501d0c56cb4d55f8747745a25f6e98313bd291d993b71198d52] <==
	W1002 07:01:56.378169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:58.381500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:58.388738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:00.410706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:00.417414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:02.421475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:02.426914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:04.430638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:04.437876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:06.441381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:06.446648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:08.450059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:08.467299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:10.470382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:10.478360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:12.482389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:12.487691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:14.491273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:14.499598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:16.503004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:16.507471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:18.510985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:18.515556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:20.518925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:20.527065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b6e29d695f30c555b8331f24495a017311a6861a44963ea29628989e2435a110] <==
	I1002 06:50:55.882784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 06:50:55.888045       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-615837 -n functional-615837
helpers_test.go:269: (dbg) Run:  kubectl --context functional-615837 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-wkwvf hello-node-connect-7d85dfc575-579rb
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-615837 describe pod busybox-mount hello-node-75c85bcc94-wkwvf hello-node-connect-7d85dfc575-579rb
helpers_test.go:290: (dbg) kubectl --context functional-615837 describe pod busybox-mount hello-node-75c85bcc94-wkwvf hello-node-connect-7d85dfc575-579rb:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615837/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 07:01:51 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://f5c98f826c5b881af3062244a966803195471747d18552484867ee2a7c0b7da8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 07:01:53 +0000
	      Finished:     Thu, 02 Oct 2025 07:01:53 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2t2h5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2t2h5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  30s   default-scheduler  Successfully assigned default/busybox-mount to functional-615837
	  Normal  Pulling    31s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     29s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.934s (1.934s including waiting). Image size: 3774172 bytes.
	  Normal  Created    29s   kubelet            Created container: mount-munger
	  Normal  Started    29s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-wkwvf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615837/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:51:41 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-24c96 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-24c96:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wkwvf to functional-615837
	  Normal   Pulling    7m39s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m39s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m39s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    27s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     27s (x43 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-579rb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-615837/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:52:18 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gr9p7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gr9p7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-579rb to functional-615837
	  Normal   Pulling    7m (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m (x5 over 10m)      kubelet            Error: ErrImagePull
	  Warning  Failed     4m55s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m42s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.54s)
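Every ErrImagePull event above reports the same root cause: CRI-O is running with short-name resolution in enforcing mode, so the unqualified reference "kicbase/echo-server" matches more than one unqualified-search registry and is rejected as an ambiguous list. A minimal sketch of two possible workarounds, assuming docker.io is the intended registry; the alias file name below is hypothetical and not taken from this report:

	# Workaround 1 (sketch): deploy with a fully qualified reference instead of a short name
	kubectl --context functional-615837 create deployment hello-node --image=docker.io/kicbase/echo-server:latest

	# Workaround 2 (sketch, hypothetical file name): inside "minikube ssh", add a short-name
	# alias so CRI-O resolves the unqualified name to a single registry, then restart crio
	printf '[aliases]\n"kicbase/echo-server" = "docker.io/kicbase/echo-server"\n' \
	  | sudo tee /etc/containers/registries.conf.d/99-echo-server.conf
	sudo systemctl restart crio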

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image load --daemon kicbase/echo-server:functional-615837 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-615837" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image load --daemon kicbase/echo-server:functional-615837 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-615837" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-615837
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image load --daemon kicbase/echo-server:functional-615837 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-615837" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.30s)
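The three ImageCommands/Image*Daemon failures share one symptom: "image load --daemon" returns, yet the follow-up "image ls" does not list "kicbase/echo-server:functional-615837". A minimal manual check under the same profile, assuming a host Docker daemon; the tarball path and the fully qualified docker.io/ spelling that CRI-O may report are assumptions, not taken from this report:

	# confirm the tag exists in the host Docker daemon before loading
	docker image inspect kicbase/echo-server:functional-615837 --format '{{.Id}}'

	# load it into the cluster runtime, either straight from the daemon or via an archive
	out/minikube-linux-arm64 -p functional-615837 image load --daemon kicbase/echo-server:functional-615837
	docker save -o /tmp/echo-server.tar kicbase/echo-server:functional-615837
	out/minikube-linux-arm64 -p functional-615837 image load /tmp/echo-server.tar

	# CRI-O may list the image fully qualified, so match both spellings
	out/minikube-linux-arm64 -p functional-615837 image ls | grep -E '(docker.io/)?kicbase/echo-server'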

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-615837 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-615837 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-wkwvf" [51ee02da-9ddd-4188-b96d-9b3418215528] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-615837 -n functional-615837
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-02 07:01:41.595514645 +0000 UTC m=+1221.941794202
functional_test.go:1460: (dbg) Run:  kubectl --context functional-615837 describe po hello-node-75c85bcc94-wkwvf -n default
functional_test.go:1460: (dbg) kubectl --context functional-615837 describe po hello-node-75c85bcc94-wkwvf -n default:
Name:             hello-node-75c85bcc94-wkwvf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-615837/192.168.49.2
Start Time:       Thu, 02 Oct 2025 06:51:41 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-24c96 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-24c96:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wkwvf to functional-615837
Normal   Pulling    6m58s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m58s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m58s (x5 over 10m)     kubelet            Error: ErrImagePull
Normal   BackOff    4m53s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m53s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-615837 logs hello-node-75c85bcc94-wkwvf -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-615837 logs hello-node-75c85bcc94-wkwvf -n default: exit status 1 (97.049566ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-wkwvf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-615837 logs hello-node-75c85bcc94-wkwvf -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.84s)
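
Note: the Events table above points at the root cause of this failure: CRI-O's short-name mode is enforcing, so the unqualified `kicbase/echo-server` reference resolves ambiguously and every pull ends in ErrImagePull and ImagePullBackOff, which in turn leaves the hello-node service used by the ServiceCmd subtests below without a running pod. A hedged sketch of the same deployment created with a fully qualified reference, assuming docker.io is the registry that hosts kicbase/echo-server:

    // deploy_fq_image.go: hello-node with a fully qualified image, so the
    // enforcing short-name mode has nothing to resolve.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func run(args ...string) {
        cmd := exec.Command("kubectl", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "kubectl failed:", err)
            os.Exit(1)
        }
    }

    func main() {
        ctx := "functional-615837"
        // Fully qualified name: registry + repository + tag, no short-name lookup needed.
        image := "docker.io/kicbase/echo-server:latest" // assumption: intended registry
        run("--context", ctx, "create", "deployment", "hello-node", "--image", image)
        run("--context", ctx, "expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
        run("--context", ctx, "rollout", "status", "deployment/hello-node", "--timeout=120s")
    }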

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image save kicbase/echo-server:functional-615837 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)
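
Note: ImageSaveToFile only checks that the tarball exists after `image save`. A small sketch of that save-then-stat check, using the binary, profile and image names from this run; the relative tar path here is an assumption for local use:

    // check_image_save.go: run "image save" and verify the tar was written.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        minikube := "out/minikube-linux-arm64" // assumption: built test binary
        profile := "functional-615837"
        tag := "kicbase/echo-server:" + profile
        tar := "echo-server-save.tar"

        if out, err := exec.Command(minikube, "-p", profile, "image", "save", tag, tar).CombinedOutput(); err != nil {
            fmt.Fprintf(os.Stderr, "image save failed: %v\n%s", err, out)
            os.Exit(1)
        }
        info, err := os.Stat(tar)
        if err != nil {
            fmt.Fprintf(os.Stderr, "FAIL: %s not written: %v\n", tar, err)
            os.Exit(1)
        }
        fmt.Printf("OK: %s written (%d bytes)\n", tar, info.Size())
    }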

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1002 06:51:42.836241  317930 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:51:42.836449  317930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:51:42.836465  317930 out.go:374] Setting ErrFile to fd 2...
	I1002 06:51:42.836471  317930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:51:42.836748  317930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:51:42.837379  317930 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:51:42.837540  317930 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 06:51:42.838031  317930 cli_runner.go:164] Run: docker container inspect functional-615837 --format={{.State.Status}}
	I1002 06:51:42.855371  317930 ssh_runner.go:195] Run: systemctl --version
	I1002 06:51:42.855431  317930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-615837
	I1002 06:51:42.876200  317930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/functional-615837/id_rsa Username:docker}
	I1002 06:51:42.969689  317930 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1002 06:51:42.969764  317930 cache_images.go:254] Failed to load cached images for "functional-615837": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1002 06:51:42.969793  317930 cache_images.go:266] failed pushing to: functional-615837

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
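
Note: ImageLoadFromFile fails as a knock-on effect: the tar that ImageSaveToFile should have written is missing, which is exactly the stat error surfaced by cache_images.go:254 above. A sketch of a pre-flight existence check before calling `image load`; the path is the one this run expected:

    // preflight_load.go: refuse to call "image load" when the tar is absent.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        tar := "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar"
        if _, err := os.Stat(tar); err != nil {
            fmt.Fprintf(os.Stderr, "refusing to load: %v (did the earlier image save succeed?)\n", err)
            os.Exit(1)
        }
        out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-615837",
            "image", "load", tar).CombinedOutput()
        if err != nil {
            fmt.Fprintf(os.Stderr, "image load failed: %v\n%s", err, out)
            os.Exit(1)
        }
        fmt.Println("loaded", tar)
    }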

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-615837
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image save --daemon kicbase/echo-server:functional-615837 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-615837
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-615837: exit status 1 (16.77947ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-615837

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-615837

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
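
Note: ImageSaveDaemon looks for the exported tag under the `localhost/` prefix, which is the name the harness expects `image save --daemon` to produce in the host Docker daemon (functional_test.go:447). A sketch that runs the save and then probes both spellings with `docker image inspect`:

    // check_save_daemon.go: verify the image came back to the host docker daemon.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func exists(ref string) bool {
        // docker image inspect exits non-zero when the reference is unknown.
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        profile := "functional-615837"
        if out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
            "image", "save", "--daemon", "kicbase/echo-server:"+profile).CombinedOutput(); err != nil {
            fmt.Fprintf(os.Stderr, "image save --daemon failed: %v\n%s", err, out)
            os.Exit(1)
        }
        for _, ref := range []string{
            "localhost/kicbase/echo-server:" + profile, // spelling the harness checks
            "kicbase/echo-server:" + profile,           // fallback spelling
        } {
            if exists(ref) {
                fmt.Println("found in docker daemon:", ref)
                return
            }
        }
        fmt.Fprintln(os.Stderr, "FAIL: image not present in the host docker daemon")
        os.Exit(1)
    }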

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-615837 service --namespace=default --https --url hello-node: exit status 115 (384.822399ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31242
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-615837 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)
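
Note: this failure and the Format and URL subtests that follow all exit with SVC_UNREACHABLE for the same reason: hello-node has no running pod behind it (see the DeployApp failure above), so minikube prints the NodePort URL but refuses to report success. A sketch that checks for ready endpoints before asking for the URL; context, profile and service name are the ones from this run:

    // service_url_guard.go: only ask for a service URL once the service has
    // ready endpoints, which is the condition SVC_UNREACHABLE reports on.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        ctx, profile, svc := "functional-615837", "functional-615837", "hello-node"

        // Ready endpoints show up as ip addresses in the Endpoints object.
        out, err := exec.Command("kubectl", "--context", ctx, "get", "endpoints", svc,
            "-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
        if err != nil || strings.TrimSpace(string(out)) == "" {
            fmt.Fprintln(os.Stderr, "no ready endpoints for", svc, "- fix the deployment before fetching the URL")
            os.Exit(1)
        }

        url, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "service", svc, "--url").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "service --url failed:", err)
            os.Exit(1)
        }
        fmt.Println("service URL:", strings.TrimSpace(string(url)))
    }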

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-615837 service hello-node --url --format={{.IP}}: exit status 115 (394.795344ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-615837 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-615837 service hello-node --url: exit status 115 (394.5672ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31242
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-615837 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31242
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (508.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-550225 stop --alsologtostderr -v 5: (36.98513781s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 start --wait true --alsologtostderr -v 5
E1002 07:09:25.127224  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:09:28.908151  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:11:41.264375  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:12:08.969347  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:14:28.909249  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-550225 start --wait true --alsologtostderr -v 5: exit status 80 (7m49.491441365s)

                                                
                                                
-- stdout --
	* [ha-550225] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-550225" primary control-plane node in "ha-550225" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-550225-m02" control-plane node in "ha-550225" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:08:44.939810  341591 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:08:44.940011  341591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:08:44.940043  341591 out.go:374] Setting ErrFile to fd 2...
	I1002 07:08:44.940065  341591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:08:44.940373  341591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:08:44.940829  341591 out.go:368] Setting JSON to false
	I1002 07:08:44.941737  341591 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6676,"bootTime":1759382249,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:08:44.941852  341591 start.go:140] virtualization:  
	I1002 07:08:44.945309  341591 out.go:179] * [ha-550225] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:08:44.949071  341591 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:08:44.949136  341591 notify.go:220] Checking for updates...
	I1002 07:08:44.954765  341591 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:08:44.957619  341591 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:08:44.960532  341591 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:08:44.963482  341591 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:08:44.966346  341591 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:08:44.969606  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:44.969708  341591 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:08:44.989812  341591 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:08:44.989931  341591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:08:45.116140  341591 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:08:45.103955411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:08:45.116266  341591 docker.go:318] overlay module found
	I1002 07:08:45.119605  341591 out.go:179] * Using the docker driver based on existing profile
	I1002 07:08:45.122721  341591 start.go:304] selected driver: docker
	I1002 07:08:45.122756  341591 start.go:924] validating driver "docker" against &{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:08:45.122900  341591 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:08:45.123044  341591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:08:45.249038  341591 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:08:45.234686313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:08:45.251229  341591 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:08:45.251295  341591 cni.go:84] Creating CNI manager for ""
	I1002 07:08:45.251506  341591 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:08:45.251808  341591 start.go:348] cluster config:
	{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:08:45.255266  341591 out.go:179] * Starting "ha-550225" primary control-plane node in "ha-550225" cluster
	I1002 07:08:45.258893  341591 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:08:45.262396  341591 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:08:45.265430  341591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:08:45.265522  341591 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:08:45.265535  341591 cache.go:58] Caching tarball of preloaded images
	I1002 07:08:45.265608  341591 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:08:45.265695  341591 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:08:45.265710  341591 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:08:45.265874  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:45.291884  341591 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:08:45.291911  341591 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:08:45.291937  341591 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:08:45.291963  341591 start.go:360] acquireMachinesLock for ha-550225: {Name:mkc1f009b4f35f6b87d580d72d0a621c44a033f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:08:45.292028  341591 start.go:364] duration metric: took 44.932µs to acquireMachinesLock for "ha-550225"
	I1002 07:08:45.292049  341591 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:08:45.292061  341591 fix.go:54] fixHost starting: 
	I1002 07:08:45.292330  341591 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:08:45.318814  341591 fix.go:112] recreateIfNeeded on ha-550225: state=Stopped err=<nil>
	W1002 07:08:45.318856  341591 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:08:45.330622  341591 out.go:252] * Restarting existing docker container for "ha-550225" ...
	I1002 07:08:45.330751  341591 cli_runner.go:164] Run: docker start ha-550225
	I1002 07:08:45.646890  341591 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:08:45.667650  341591 kic.go:430] container "ha-550225" state is running.
	I1002 07:08:45.669709  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:08:45.694012  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:45.694609  341591 machine.go:93] provisionDockerMachine start ...
	I1002 07:08:45.694683  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:45.718481  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:45.718795  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:45.718805  341591 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:08:45.719510  341591 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 07:08:48.850571  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:08:48.850596  341591 ubuntu.go:182] provisioning hostname "ha-550225"
	I1002 07:08:48.850671  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:48.868262  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:48.868584  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:48.868602  341591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225 && echo "ha-550225" | sudo tee /etc/hostname
	I1002 07:08:49.009524  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:08:49.009614  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.027738  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:49.028058  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:49.028089  341591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:08:49.159321  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:08:49.159347  341591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:08:49.159380  341591 ubuntu.go:190] setting up certificates
	I1002 07:08:49.159407  341591 provision.go:84] configureAuth start
	I1002 07:08:49.159473  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:08:49.177020  341591 provision.go:143] copyHostCerts
	I1002 07:08:49.177064  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:49.177102  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:08:49.177123  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:49.177214  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:08:49.177322  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:49.177346  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:08:49.177356  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:49.177386  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:08:49.177445  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:49.177477  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:08:49.177486  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:49.177513  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:08:49.177571  341591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225 san=[127.0.0.1 192.168.49.2 ha-550225 localhost minikube]
	I1002 07:08:49.408806  341591 provision.go:177] copyRemoteCerts
	I1002 07:08:49.408883  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:08:49.408933  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.427268  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:49.523125  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:08:49.523193  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:08:49.541524  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:08:49.541587  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 07:08:49.560307  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:08:49.560439  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:08:49.579034  341591 provision.go:87] duration metric: took 419.599802ms to configureAuth
	I1002 07:08:49.579123  341591 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:08:49.579377  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:49.579486  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.596818  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:49.597138  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:49.597160  341591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:08:49.914967  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:08:49.914989  341591 machine.go:96] duration metric: took 4.220366309s to provisionDockerMachine
	I1002 07:08:49.914999  341591 start.go:293] postStartSetup for "ha-550225" (driver="docker")
	I1002 07:08:49.915010  341591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:08:49.915065  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:08:49.915139  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.934272  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.032623  341591 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:08:50.036993  341591 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:08:50.037025  341591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:08:50.037038  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:08:50.037102  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:08:50.037207  341591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:08:50.037223  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:08:50.037344  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:08:50.045768  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:08:50.065030  341591 start.go:296] duration metric: took 150.01442ms for postStartSetup
	I1002 07:08:50.065114  341591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:08:50.065165  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:50.083355  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.176451  341591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:08:50.181473  341591 fix.go:56] duration metric: took 4.889410348s for fixHost
	I1002 07:08:50.181541  341591 start.go:83] releasing machines lock for "ha-550225", held for 4.889504338s
	I1002 07:08:50.181637  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:08:50.200970  341591 ssh_runner.go:195] Run: cat /version.json
	I1002 07:08:50.201030  341591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:08:50.201094  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:50.201034  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:50.223487  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.226725  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.314949  341591 ssh_runner.go:195] Run: systemctl --version
	I1002 07:08:50.413766  341591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:08:50.452815  341591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:08:50.457414  341591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:08:50.457496  341591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:08:50.465709  341591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:08:50.465775  341591 start.go:495] detecting cgroup driver to use...
	I1002 07:08:50.465837  341591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:08:50.465897  341591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:08:50.481659  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:08:50.494377  341591 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:08:50.494539  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:08:50.510531  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:08:50.523730  341591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:08:50.636574  341591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:08:50.755906  341591 docker.go:234] disabling docker service ...
	I1002 07:08:50.756000  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:08:50.771446  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:08:50.785113  341591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:08:50.896624  341591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:08:51.014182  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:08:51.028269  341591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:08:51.042461  341591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:08:51.042584  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.051849  341591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:08:51.051966  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.061081  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.071350  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.080939  341591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:08:51.089739  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.099773  341591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.108596  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.118078  341591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:08:51.126369  341591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:08:51.134612  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:08:51.248761  341591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:08:51.375720  341591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:08:51.375791  341591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:08:51.380249  341591 start.go:563] Will wait 60s for crictl version
	I1002 07:08:51.380325  341591 ssh_runner.go:195] Run: which crictl
	I1002 07:08:51.384127  341591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:08:51.409087  341591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:08:51.409174  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:08:51.443563  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:08:51.476455  341591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:08:51.479290  341591 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:08:51.500260  341591 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:08:51.504889  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:08:51.515269  341591 kubeadm.go:883] updating cluster {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:08:51.515427  341591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:08:51.515487  341591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:08:51.554872  341591 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:08:51.554894  341591 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:08:51.554950  341591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:08:51.581938  341591 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:08:51.581962  341591 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:08:51.581972  341591 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:08:51.582066  341591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
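	(The kubelet drop-in above is rendered from the node's binary directory, hostname and IP. A minimal text/template sketch of producing such a unit file; the field names here are illustrative, not minikube's real template:)

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative values; minikube derives these from the cluster config shown above.
	type kubeletUnit struct {
		BinaryDir, NodeName, NodeIP string
	}

	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart={{.BinaryDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("unit").Parse(unitTmpl))
		_ = t.Execute(os.Stdout, kubeletUnit{
			BinaryDir: "/var/lib/minikube/binaries/v1.34.1",
			NodeName:  "ha-550225",
			NodeIP:    "192.168.49.2",
		})
	}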
	I1002 07:08:51.582150  341591 ssh_runner.go:195] Run: crio config
	I1002 07:08:51.655227  341591 cni.go:84] Creating CNI manager for ""
	I1002 07:08:51.655292  341591 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:08:51.655338  341591 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:08:51.655381  341591 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-550225 NodeName:ha-550225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:08:51.655547  341591 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-550225"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
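	(The kubeadm config above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---"; later in this log it is written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch, assuming gopkg.in/yaml.v3, that walks such a stream and reports each document's kind:)

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log below
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				fmt.Fprintln(os.Stderr, "decode:", err)
				os.Exit(1)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}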
	
	I1002 07:08:51.655604  341591 kube-vip.go:115] generating kube-vip config ...
	I1002 07:08:51.655689  341591 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:08:51.669633  341591 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
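	(The fallback above triggers because "lsmod | grep ip_vs" exits non-zero, so kube-vip's IPVS-based control-plane load balancing is skipped in favour of the ARP-mode config that follows. A minimal Go sketch of the same module check read straight from /proc/modules, an alternative to shelling out rather than what minikube actually does:)

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// hasModule scans /proc/modules (the file lsmod reads) for a loaded module name.
	func hasModule(name string) (bool, error) {
		f, err := os.Open("/proc/modules")
		if err != nil {
			return false, err
		}
		defer f.Close()
		s := bufio.NewScanner(f)
		for s.Scan() {
			if strings.HasPrefix(s.Text(), name+" ") {
				return true, nil
			}
		}
		return false, s.Err()
	}

	func main() {
		ok, err := hasModule("ip_vs")
		fmt.Println("ip_vs loaded:", ok, err)
	}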
	I1002 07:08:51.669809  341591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
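	(kube-vip runs as a static pod: the manifest above only has to land in the kubelet's staticPodPath, /etc/kubernetes/manifests per the KubeletConfiguration earlier, which the log does a few lines below via scp. A minimal sketch of that mechanism; kubeVipYAML stands in for the full manifest printed above:)

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// kubeVipYAML would hold the Pod manifest printed in the log above.
	const kubeVipYAML = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	# ... remaining fields as generated ...
	`

	func main() {
		dir := "/etc/kubernetes/manifests" // staticPodPath from the kubelet config
		target := filepath.Join(dir, "kube-vip.yaml")
		if err := os.MkdirAll(dir, 0755); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// The kubelet watches this directory and (re)creates the static pod on change.
		if err := os.WriteFile(target, []byte(kubeVipYAML), 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("wrote", target)
	}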
	I1002 07:08:51.669912  341591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:08:51.678877  341591 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:08:51.678968  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 07:08:51.687674  341591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:08:51.701824  341591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:08:51.715602  341591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1002 07:08:51.729053  341591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:08:51.742491  341591 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:08:51.746387  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:08:51.756532  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:08:51.864835  341591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:08:51.883513  341591 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.2
	I1002 07:08:51.883542  341591 certs.go:195] generating shared ca certs ...
	I1002 07:08:51.883559  341591 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:51.883827  341591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:08:51.883890  341591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:08:51.883904  341591 certs.go:257] generating profile certs ...
	I1002 07:08:51.884024  341591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:08:51.884065  341591 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa
	I1002 07:08:51.884101  341591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1002 07:08:52.084876  341591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa ...
	I1002 07:08:52.084913  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa: {Name:mk90c6f5aee289b034fa32e2cf7c0be9f53e848e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.085095  341591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa ...
	I1002 07:08:52.085111  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa: {Name:mk49689d29918ab68ff897f47cace9dfee85c265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.085191  341591 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt
	I1002 07:08:52.085343  341591 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key
	I1002 07:08:52.085487  341591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:08:52.085509  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:08:52.085529  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:08:52.085552  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:08:52.085570  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:08:52.085588  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:08:52.085612  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:08:52.085628  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:08:52.085643  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:08:52.085700  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:08:52.085732  341591 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:08:52.085744  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:08:52.085773  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:08:52.085797  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:08:52.085823  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:08:52.085877  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:08:52.085911  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.085930  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.085941  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.087620  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:08:52.117144  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:08:52.137577  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:08:52.157475  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:08:52.184553  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:08:52.204351  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:08:52.223284  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:08:52.243353  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:08:52.262671  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:08:52.281139  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:08:52.299758  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:08:52.317722  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:08:52.331012  341591 ssh_runner.go:195] Run: openssl version
	I1002 07:08:52.338277  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:08:52.346960  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.351159  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.351246  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.393022  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:08:52.401297  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:08:52.409980  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.414890  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.414990  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.456952  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:08:52.465241  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:08:52.474008  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.478217  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.478283  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.521200  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
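	(The pattern above installs each extra CA into the system trust store by symlinking it as /etc/ssl/certs/<subject-hash>.0, the name OpenSSL's library looks up; the hash comes from "openssl x509 -hash -noout". A small sketch of the same convention, assuming the openssl binary is on PATH, illustrative rather than minikube's code:)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installTrusted links certPath into /etc/ssl/certs under its OpenSSL
	// subject-hash name, the layout libcrypto uses to find trusted CAs.
	func installTrusted(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // mimic ln -fs: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installTrusted("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}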
	I1002 07:08:52.529506  341591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:08:52.535033  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:08:52.580207  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:08:52.630699  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:08:52.691156  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:08:52.745220  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:08:52.803585  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
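	(Each "openssl x509 -checkend 86400" call above asks whether a certificate expires within the next 24 hours. A minimal crypto/x509 sketch of the same check; the helper is hypothetical and the paths are taken from the log:)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first PEM certificate in path expires
	// before now+window, i.e. what `openssl x509 -checkend <seconds>` tests.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			fmt.Println(p, "expires within 24h:", soon, err)
		}
	}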
	I1002 07:08:52.888339  341591 kubeadm.go:400] StartCluster: {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:08:52.888575  341591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:08:52.888690  341591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:08:52.933281  341591 cri.go:89] found id: "33fca634f948db8aca5186955624e23716df2846985727034e3329708ce55ca0"
	I1002 07:08:52.933358  341591 cri.go:89] found id: "d6201e9ebb1f7834795f1ed34af1c1531b7711bfef7ba9ec4f8b86cb19833552"
	I1002 07:08:52.933379  341591 cri.go:89] found id: "a09069dcbe74c144c7fb0aaabba0782111369a1c5d884db352906bac62c464a7"
	I1002 07:08:52.933401  341591 cri.go:89] found id: "ff6f36ad276da8f6ea87b58c1a6e4675a17751c812adf0bea3fb2ce4a3183dc0"
	I1002 07:08:52.933436  341591 cri.go:89] found id: "1360f133f64f29f11610a00ea639f98b5d2bbaae5d3ea5c0f099d47a97c24451"
	I1002 07:08:52.933462  341591 cri.go:89] found id: ""
	I1002 07:08:52.933564  341591 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 07:08:52.954557  341591 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T07:08:52Z" level=error msg="open /run/runc: no such file or directory"
	I1002 07:08:52.954731  341591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:08:52.966519  341591 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:08:52.966556  341591 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:08:52.966613  341591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:08:52.977313  341591 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:08:52.977720  341591 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-550225" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:08:52.977831  341591 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-292504/kubeconfig needs updating (will repair): [kubeconfig missing "ha-550225" cluster setting kubeconfig missing "ha-550225" context setting]
	I1002 07:08:52.978102  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.978623  341591 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:08:52.979134  341591 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:08:52.979154  341591 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:08:52.979160  341591 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:08:52.979165  341591 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:08:52.979174  341591 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:08:52.979433  341591 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:08:52.979820  341591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:08:52.995042  341591 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:08:52.995069  341591 kubeadm.go:601] duration metric: took 28.506605ms to restartPrimaryControlPlane
	I1002 07:08:52.995093  341591 kubeadm.go:402] duration metric: took 106.757943ms to StartCluster
	I1002 07:08:52.995110  341591 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.995174  341591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:08:52.995752  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.995946  341591 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:08:52.995973  341591 start.go:241] waiting for startup goroutines ...
	I1002 07:08:52.995988  341591 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:08:52.996396  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:53.001878  341591 out.go:179] * Enabled addons: 
	I1002 07:08:53.004925  341591 addons.go:514] duration metric: took 8.918946ms for enable addons: enabled=[]
	I1002 07:08:53.004983  341591 start.go:246] waiting for cluster config update ...
	I1002 07:08:53.004993  341591 start.go:255] writing updated cluster config ...
	I1002 07:08:53.008718  341591 out.go:203] 
	I1002 07:08:53.012058  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:53.012193  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:53.015686  341591 out.go:179] * Starting "ha-550225-m02" control-plane node in "ha-550225" cluster
	I1002 07:08:53.018685  341591 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:08:53.021796  341591 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:08:53.024737  341591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:08:53.024783  341591 cache.go:58] Caching tarball of preloaded images
	I1002 07:08:53.024902  341591 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:08:53.024918  341591 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:08:53.025045  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:53.025270  341591 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:08:53.053242  341591 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:08:53.053267  341591 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:08:53.053282  341591 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:08:53.053306  341591 start.go:360] acquireMachinesLock for ha-550225-m02: {Name:mk11ef625bc214163cbeacdb736ddec4214a8374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:08:53.053365  341591 start.go:364] duration metric: took 39.27µs to acquireMachinesLock for "ha-550225-m02"
	I1002 07:08:53.053391  341591 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:08:53.053401  341591 fix.go:54] fixHost starting: m02
	I1002 07:08:53.053663  341591 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:08:53.082995  341591 fix.go:112] recreateIfNeeded on ha-550225-m02: state=Stopped err=<nil>
	W1002 07:08:53.083020  341591 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:08:53.086409  341591 out.go:252] * Restarting existing docker container for "ha-550225-m02" ...
	I1002 07:08:53.086490  341591 cli_runner.go:164] Run: docker start ha-550225-m02
	I1002 07:08:53.526547  341591 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:08:53.560540  341591 kic.go:430] container "ha-550225-m02" state is running.
	I1002 07:08:53.560941  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:08:53.589319  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:53.589569  341591 machine.go:93] provisionDockerMachine start ...
	I1002 07:08:53.589631  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:53.613911  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:53.614275  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:53.614286  341591 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:08:53.615331  341591 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 07:08:56.845810  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:08:56.845831  341591 ubuntu.go:182] provisioning hostname "ha-550225-m02"
	I1002 07:08:56.845894  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:56.874342  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:56.874643  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:56.874653  341591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225-m02 && echo "ha-550225-m02" | sudo tee /etc/hostname
	I1002 07:08:57.125200  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:08:57.125348  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:57.175744  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:57.176048  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:57.176063  341591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:08:57.375895  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:08:57.375973  341591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:08:57.376006  341591 ubuntu.go:190] setting up certificates
	I1002 07:08:57.376047  341591 provision.go:84] configureAuth start
	I1002 07:08:57.376159  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:08:57.404649  341591 provision.go:143] copyHostCerts
	I1002 07:08:57.404689  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:57.404723  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:08:57.404730  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:57.404806  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:08:57.404883  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:57.404899  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:08:57.404903  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:57.404928  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:08:57.404966  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:57.404981  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:08:57.404985  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:57.405007  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:08:57.405049  341591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225-m02 san=[127.0.0.1 192.168.49.3 ha-550225-m02 localhost minikube]
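	(The provisioning step above issues a machine server certificate whose SANs cover the node's names and addresses. A compact crypto/x509 sketch of building a certificate with those SANs; it is self-signed here for brevity, whereas minikube signs with its ca.pem/ca-key.pem:)

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SANs taken from the provisioning log line above.
		dnsNames := []string{"ha-550225-m02", "localhost", "minikube"}
		ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")}

		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-550225-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     dnsNames,
			IPAddresses:  ips,
		}
		// Self-signed for brevity; a CA-signed cert would pass the CA cert and key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}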
	I1002 07:08:58.253352  341591 provision.go:177] copyRemoteCerts
	I1002 07:08:58.253471  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:08:58.253549  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:58.284716  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:58.445457  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:08:58.445522  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:08:58.470364  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:08:58.470427  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:08:58.499404  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:08:58.499467  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 07:08:58.532579  341591 provision.go:87] duration metric: took 1.156483399s to configureAuth
	I1002 07:08:58.532607  341591 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:08:58.532851  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:58.532977  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:58.555257  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:58.555589  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:58.555604  341591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:08:59.611219  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:08:59.611244  341591 machine.go:96] duration metric: took 6.021666332s to provisionDockerMachine
	I1002 07:08:59.611278  341591 start.go:293] postStartSetup for "ha-550225-m02" (driver="docker")
	I1002 07:08:59.611297  341591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:08:59.611400  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:08:59.611473  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.649812  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:59.756024  341591 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:08:59.760197  341591 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:08:59.760226  341591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:08:59.760237  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:08:59.760299  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:08:59.760377  341591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:08:59.760384  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:08:59.760484  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:08:59.769466  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:08:59.791590  341591 start.go:296] duration metric: took 180.289185ms for postStartSetup
	I1002 07:08:59.791715  341591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:08:59.791794  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.812896  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:59.913229  341591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:08:59.919306  341591 fix.go:56] duration metric: took 6.865897009s for fixHost
	I1002 07:08:59.919329  341591 start.go:83] releasing machines lock for "ha-550225-m02", held for 6.865950129s
	I1002 07:08:59.919398  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:08:59.946647  341591 out.go:179] * Found network options:
	I1002 07:08:59.949695  341591 out.go:179]   - NO_PROXY=192.168.49.2
	W1002 07:08:59.952715  341591 proxy.go:120] fail to check proxy env: Error ip not in block
	W1002 07:08:59.952759  341591 proxy.go:120] fail to check proxy env: Error ip not in block
	I1002 07:08:59.952829  341591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:08:59.952894  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.953175  341591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:08:59.953233  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.989027  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:59.990560  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:09:00.478157  341591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:09:00.501356  341591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:09:00.501454  341591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:09:00.524313  341591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:09:00.524374  341591 start.go:495] detecting cgroup driver to use...
	I1002 07:09:00.524424  341591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:09:00.524542  341591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:09:00.551686  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:09:00.586292  341591 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:09:00.586360  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:09:00.619869  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:09:00.637822  341591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:09:01.096286  341591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:09:01.469209  341591 docker.go:234] disabling docker service ...
	I1002 07:09:01.469292  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:09:01.568628  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:09:01.594625  341591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:09:01.844380  341591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:09:02.076706  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:09:02.091901  341591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:09:02.109279  341591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:09:02.109364  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.122659  341591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:09:02.122751  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.137700  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.152110  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.170421  341591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:09:02.185373  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.201415  341591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.215850  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.226273  341591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:09:02.235058  341591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
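	(The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image, forcing the cgroupfs cgroup manager and opening unprivileged low ports via default_sysctls, before IPv4 forwarding is enabled and systemd is reloaded. A rough Go sketch of two of those line-oriented substitutions, illustrative only; the real flow shells out to sed over SSH:)

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		conf := string(data)

		// pause_image = "registry.k8s.io/pause:3.10.1"
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// cgroup_manager = "cgroupfs"
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}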
	I1002 07:09:02.244989  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:09:02.482152  341591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:10:32.816328  341591 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.334137072s)
	I1002 07:10:32.816356  341591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:10:32.816423  341591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:10:32.820364  341591 start.go:563] Will wait 60s for crictl version
	I1002 07:10:32.820431  341591 ssh_runner.go:195] Run: which crictl
	I1002 07:10:32.824000  341591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:10:32.850862  341591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:10:32.850953  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:10:32.880614  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:10:32.912245  341591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:10:32.915198  341591 out.go:179]   - env NO_PROXY=192.168.49.2
	I1002 07:10:32.918443  341591 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:10:32.933458  341591 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:10:32.937660  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:10:32.947835  341591 mustload.go:65] Loading cluster: ha-550225
	I1002 07:10:32.948074  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:10:32.948339  341591 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:10:32.965455  341591 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:10:32.965737  341591 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.3
	I1002 07:10:32.965753  341591 certs.go:195] generating shared ca certs ...
	I1002 07:10:32.965768  341591 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:10:32.965883  341591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:10:32.965988  341591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:10:32.966005  341591 certs.go:257] generating profile certs ...
	I1002 07:10:32.966093  341591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:10:32.966164  341591 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.e172f685
	I1002 07:10:32.966209  341591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:10:32.966223  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:10:32.966236  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:10:32.966258  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:10:32.966274  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:10:32.966287  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:10:32.966299  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:10:32.966316  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:10:32.966327  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:10:32.966380  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:10:32.966412  341591 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:10:32.966426  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:10:32.966450  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:10:32.966474  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:10:32.966495  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:10:32.966534  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:10:32.966563  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:32.966580  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:10:32.966591  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:10:32.966649  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:10:32.984090  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:10:33.079415  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1002 07:10:33.085346  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1002 07:10:33.094080  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1002 07:10:33.098124  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1002 07:10:33.106895  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1002 07:10:33.110488  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1002 07:10:33.119266  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1002 07:10:33.123712  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1002 07:10:33.133884  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1002 07:10:33.137901  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1002 07:10:33.146372  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1002 07:10:33.150238  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1002 07:10:33.158857  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:10:33.178733  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:10:33.198632  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:10:33.218076  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:10:33.238363  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:10:33.257196  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:10:33.276752  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:10:33.296959  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:10:33.315515  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:10:33.334382  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:10:33.353232  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:10:33.371930  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1002 07:10:33.386343  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1002 07:10:33.402145  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1002 07:10:33.416991  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1002 07:10:33.433404  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1002 07:10:33.447888  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1002 07:10:33.461804  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1002 07:10:33.478080  341591 ssh_runner.go:195] Run: openssl version
	I1002 07:10:33.486077  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:10:33.496093  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:33.500252  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:33.500323  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:33.542203  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:10:33.550474  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:10:33.559422  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:10:33.563475  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:10:33.563544  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:10:33.606638  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:10:33.614955  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:10:33.624760  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:10:33.629454  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:10:33.629532  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:10:33.670697  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:10:33.679136  341591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:10:33.683757  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:10:33.729404  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:10:33.775724  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:10:33.817095  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:10:33.859304  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:10:33.900718  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 07:10:33.942018  341591 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1002 07:10:33.942118  341591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:10:33.942147  341591 kube-vip.go:115] generating kube-vip config ...
	I1002 07:10:33.942211  341591 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:10:33.955152  341591 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:10:33.955209  341591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1002 07:10:33.955278  341591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:10:33.964060  341591 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:10:33.964146  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1002 07:10:33.972349  341591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 07:10:33.986955  341591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:10:34.000411  341591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:10:34.019944  341591 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:10:34.024237  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:10:34.035378  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:10:34.172194  341591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:10:34.188479  341591 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:10:34.188914  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:10:34.194079  341591 out.go:179] * Verifying Kubernetes components...
	I1002 07:10:34.196849  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:10:34.335762  341591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:10:34.350979  341591 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1002 07:10:34.351051  341591 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1002 07:10:34.351428  341591 node_ready.go:35] waiting up to 6m0s for node "ha-550225-m02" to be "Ready" ...
	I1002 07:11:06.236659  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:11:06.237065  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1002 07:11:08.352628  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:10.352901  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:12.852094  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:14.852800  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:19.143807  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:12:19.144210  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:52046->192.168.49.2:8443: read: connection reset by peer
	W1002 07:12:21.352097  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:23.352198  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:25.352707  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:27.852697  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:30.352903  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:32.852934  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:35.352921  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:37.852899  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:40.352147  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:13:45.017485  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:13:45.017917  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:59354->192.168.49.2:8443: read: connection reset by peer
	W1002 07:13:47.352022  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:49.352714  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:51.352825  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:53.852618  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:55.852865  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:58.351961  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:00.352833  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:02.852671  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:04.852832  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:06.852923  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:09.352699  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:11.852644  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:14.352881  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:16.852748  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:19.352661  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:21.852776  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:23.852965  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:25.853064  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:38.355323  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	W1002 07:14:48.356581  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	I1002 07:14:50.705710  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:14:50.706028  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:34198->192.168.49.2:8443: read: connection reset by peer
	W1002 07:14:52.852642  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:55.352291  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:57.352649  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:59.852686  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:02.351992  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:04.352640  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:06.852688  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:09.351928  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:11.352599  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:13.352684  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:15.852672  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:17.852933  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:20.352697  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:22.852904  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:25.352921  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:27.852663  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:30.352554  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:32.352752  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:34.352832  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:36.852783  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:39.352648  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:41.352902  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:43.851962  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:46.352385  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:48.352592  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:50.352899  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:52.852880  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:55.352702  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:57.852560  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:59.852697  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:01.852832  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:04.352611  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:06.852632  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:08.852866  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:20.352850  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	W1002 07:16:30.353494  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	I1002 07:16:32.822894  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:16:32.823551  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:44364->192.168.49.2:8443: read: connection reset by peer
	I1002 07:16:34.352311  341591 node_ready.go:38] duration metric: took 6m0.000854058s for node "ha-550225-m02" to be "Ready" ...
	I1002 07:16:34.356665  341591 out.go:203] 
	W1002 07:16:34.359815  341591 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 07:16:34.359839  341591 out.go:285] * 
	* 
	W1002 07:16:34.362170  341591 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:16:34.365348  341591 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-550225 node list --alsologtostderr -v 5" : exit status 80
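The exit status 80 above reduces to the 6m node-Ready wait expiring while https://192.168.49.2:8443 kept refusing connections and timing out TLS handshakes. A minimal client-go sketch of that kind of readiness poll follows; it is illustrative only, not minikube's actual node_ready.go, and the kubeconfig path and 2s poll interval are assumptions:

	// Poll a node's Ready condition until it is True or the timeout expires,
	// retrying through transient apiserver errors like the ones in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// connection refused / TLS handshake timeout: keep retrying, as the "will retry" warnings do.
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		// kubeconfig path is a placeholder for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), cs, "ha-550225-m02", 6*time.Minute); err != nil {
			// corresponds to "WaitNodeCondition: context deadline exceeded" in the stderr above
			fmt.Println("node never became Ready:", err)
		}
	}

When the apiserver never comes back, the poll exhausts the deadline and the caller surfaces GUEST_START, which is exactly the shape of the failure recorded here.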
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-550225
helpers_test.go:243: (dbg) docker inspect ha-550225:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	        "Created": "2025-10-02T07:02:30.539981852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 341718,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:08:45.398672695Z",
	            "FinishedAt": "2025-10-02T07:08:44.591030685Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hosts",
	        "LogPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c-json.log",
	        "Name": "/ha-550225",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-550225:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-550225",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	                "LowerDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-550225",
	                "Source": "/var/lib/docker/volumes/ha-550225/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-550225",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-550225",
	                "name.minikube.sigs.k8s.io": "ha-550225",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c2d172050d987c718db772c5aba92de1dca5d0823f878bf48657984e81707ec",
	            "SandboxKey": "/var/run/docker/netns/8c2d172050d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-550225": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:7c:4c:83:e8:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "87a294cab4b5d50d5f227902c62678f378fbede9275f1d54f0b3de7a1f36e1a0",
	                    "EndpointID": "d33c1aff4a1a0ea6be34d85bfad24dbdc7a27874c0cd3475808500db307a6e4e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-550225",
	                        "1c1f8ec53310"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
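The "Ports" block in the inspect output above is where the harness gets the host-mapped ports (for example 33178 for 22/tcp, used by the sshutil.go client earlier in the log). A minimal Go sketch of reading one of those mappings back, assuming only that the Docker CLI is on PATH and reusing the same inspect template shown in the cli_runner call above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort returns the host port that Docker mapped to the given container port,
	// e.g. hostPort("ha-550225", "22/tcp") -> "33178" for the inspect data above.
	func hostPort(container, containerPort string) (string, error) {
		format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		p, err := hostPort("ha-550225", "22/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", p)
	}

The same template with "8443/tcp" would return 33181, the host-side binding for the apiserver port that the restart test ultimately failed to reach.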
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-550225 -n ha-550225
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-550225 -n ha-550225: exit status 2 (318.148187ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-550225 cp ha-550225-m03:/home/docker/cp-test.txt ha-550225-m02:/home/docker/cp-test_ha-550225-m03_ha-550225-m02.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m02 sudo cat /home/docker/cp-test_ha-550225-m03_ha-550225-m02.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m03:/home/docker/cp-test.txt ha-550225-m04:/home/docker/cp-test_ha-550225-m03_ha-550225-m04.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test_ha-550225-m03_ha-550225-m04.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp testdata/cp-test.txt ha-550225-m04:/home/docker/cp-test.txt                                                             │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216719830/001/cp-test_ha-550225-m04.txt │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225:/home/docker/cp-test_ha-550225-m04_ha-550225.txt                       │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225.txt                                                 │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m02:/home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m02 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m03:/home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ node    │ ha-550225 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ node    │ ha-550225 node start m02 --alsologtostderr -v 5                                                                                      │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:08 UTC │
	│ node    │ ha-550225 node list --alsologtostderr -v 5                                                                                           │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │                     │
	│ stop    │ ha-550225 stop --alsologtostderr -v 5                                                                                                │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │ 02 Oct 25 07:08 UTC │
	│ start   │ ha-550225 start --wait true --alsologtostderr -v 5                                                                                   │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │                     │
	│ node    │ ha-550225 node list --alsologtostderr -v 5                                                                                           │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:08:44
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:08:44.939810  341591 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:08:44.940011  341591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:08:44.940043  341591 out.go:374] Setting ErrFile to fd 2...
	I1002 07:08:44.940065  341591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:08:44.940373  341591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:08:44.940829  341591 out.go:368] Setting JSON to false
	I1002 07:08:44.941737  341591 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6676,"bootTime":1759382249,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:08:44.941852  341591 start.go:140] virtualization:  
	I1002 07:08:44.945309  341591 out.go:179] * [ha-550225] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:08:44.949071  341591 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:08:44.949136  341591 notify.go:220] Checking for updates...
	I1002 07:08:44.954765  341591 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:08:44.957619  341591 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:08:44.960532  341591 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:08:44.963482  341591 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:08:44.966346  341591 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:08:44.969606  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:44.969708  341591 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:08:44.989812  341591 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:08:44.989931  341591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:08:45.116140  341591 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:08:45.103955411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:08:45.116266  341591 docker.go:318] overlay module found
	I1002 07:08:45.119605  341591 out.go:179] * Using the docker driver based on existing profile
	I1002 07:08:45.122721  341591 start.go:304] selected driver: docker
	I1002 07:08:45.122756  341591 start.go:924] validating driver "docker" against &{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:08:45.122900  341591 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:08:45.123044  341591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:08:45.249038  341591 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:08:45.234686313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:08:45.251229  341591 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:08:45.251295  341591 cni.go:84] Creating CNI manager for ""
	I1002 07:08:45.251506  341591 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:08:45.251808  341591 start.go:348] cluster config:
	{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:08:45.255266  341591 out.go:179] * Starting "ha-550225" primary control-plane node in "ha-550225" cluster
	I1002 07:08:45.258893  341591 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:08:45.262396  341591 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:08:45.265430  341591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:08:45.265522  341591 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:08:45.265535  341591 cache.go:58] Caching tarball of preloaded images
	I1002 07:08:45.265608  341591 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:08:45.265695  341591 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:08:45.265710  341591 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:08:45.265874  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:45.291884  341591 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:08:45.291911  341591 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:08:45.291937  341591 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:08:45.291963  341591 start.go:360] acquireMachinesLock for ha-550225: {Name:mkc1f009b4f35f6b87d580d72d0a621c44a033f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:08:45.292028  341591 start.go:364] duration metric: took 44.932µs to acquireMachinesLock for "ha-550225"
	I1002 07:08:45.292049  341591 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:08:45.292061  341591 fix.go:54] fixHost starting: 
	I1002 07:08:45.292330  341591 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:08:45.318814  341591 fix.go:112] recreateIfNeeded on ha-550225: state=Stopped err=<nil>
	W1002 07:08:45.318856  341591 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:08:45.330622  341591 out.go:252] * Restarting existing docker container for "ha-550225" ...
	I1002 07:08:45.330751  341591 cli_runner.go:164] Run: docker start ha-550225
	I1002 07:08:45.646890  341591 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:08:45.667650  341591 kic.go:430] container "ha-550225" state is running.
	I1002 07:08:45.669709  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:08:45.694012  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:45.694609  341591 machine.go:93] provisionDockerMachine start ...
	I1002 07:08:45.694683  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:45.718481  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:45.718795  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:45.718805  341591 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:08:45.719510  341591 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 07:08:48.850571  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:08:48.850596  341591 ubuntu.go:182] provisioning hostname "ha-550225"
	I1002 07:08:48.850671  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:48.868262  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:48.868584  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:48.868602  341591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225 && echo "ha-550225" | sudo tee /etc/hostname
	I1002 07:08:49.009524  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:08:49.009614  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.027738  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:49.028058  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:49.028089  341591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:08:49.159321  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:08:49.159347  341591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:08:49.159380  341591 ubuntu.go:190] setting up certificates
	I1002 07:08:49.159407  341591 provision.go:84] configureAuth start
	I1002 07:08:49.159473  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:08:49.177020  341591 provision.go:143] copyHostCerts
	I1002 07:08:49.177064  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:49.177102  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:08:49.177123  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:49.177214  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:08:49.177322  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:49.177346  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:08:49.177356  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:49.177386  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:08:49.177445  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:49.177477  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:08:49.177486  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:49.177513  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:08:49.177571  341591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225 san=[127.0.0.1 192.168.49.2 ha-550225 localhost minikube]
	I1002 07:08:49.408806  341591 provision.go:177] copyRemoteCerts
	I1002 07:08:49.408883  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:08:49.408933  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.427268  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:49.523125  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:08:49.523193  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:08:49.541524  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:08:49.541587  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 07:08:49.560307  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:08:49.560439  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:08:49.579034  341591 provision.go:87] duration metric: took 419.599802ms to configureAuth
	I1002 07:08:49.579123  341591 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:08:49.579377  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:49.579486  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.596818  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:49.597138  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:49.597160  341591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:08:49.914967  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:08:49.914989  341591 machine.go:96] duration metric: took 4.220366309s to provisionDockerMachine
	I1002 07:08:49.914999  341591 start.go:293] postStartSetup for "ha-550225" (driver="docker")
	I1002 07:08:49.915010  341591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:08:49.915065  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:08:49.915139  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.934272  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.032623  341591 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:08:50.036993  341591 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:08:50.037025  341591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:08:50.037038  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:08:50.037102  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:08:50.037207  341591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:08:50.037223  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:08:50.037344  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:08:50.045768  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:08:50.065030  341591 start.go:296] duration metric: took 150.01442ms for postStartSetup
	I1002 07:08:50.065114  341591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:08:50.065165  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:50.083355  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.176451  341591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:08:50.181473  341591 fix.go:56] duration metric: took 4.889410348s for fixHost
	I1002 07:08:50.181541  341591 start.go:83] releasing machines lock for "ha-550225", held for 4.889504338s
	I1002 07:08:50.181637  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:08:50.200970  341591 ssh_runner.go:195] Run: cat /version.json
	I1002 07:08:50.201030  341591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:08:50.201094  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:50.201034  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:50.223487  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.226725  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.314949  341591 ssh_runner.go:195] Run: systemctl --version
	I1002 07:08:50.413766  341591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:08:50.452815  341591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:08:50.457414  341591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:08:50.457496  341591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:08:50.465709  341591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:08:50.465775  341591 start.go:495] detecting cgroup driver to use...
	I1002 07:08:50.465837  341591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:08:50.465897  341591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:08:50.481659  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:08:50.494377  341591 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:08:50.494539  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:08:50.510531  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:08:50.523730  341591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:08:50.636574  341591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:08:50.755906  341591 docker.go:234] disabling docker service ...
	I1002 07:08:50.756000  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:08:50.771446  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:08:50.785113  341591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:08:50.896624  341591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:08:51.014182  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:08:51.028269  341591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:08:51.042461  341591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:08:51.042584  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.051849  341591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:08:51.051966  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.061081  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.071350  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.080939  341591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:08:51.089739  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.099773  341591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.108596  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.118078  341591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:08:51.126369  341591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:08:51.134612  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:08:51.248761  341591 ssh_runner.go:195] Run: sudo systemctl restart crio
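
The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: it pins the pause image to registry.k8s.io/pause:3.10.1, sets cgroup_manager to "cgroupfs", re-adds conmon_cgroup = "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. For reference only, a minimal Go sketch of the same replace-or-append pattern, assuming a flat key = "value" drop-in file; the path and the two keys come from the log, everything else is illustrative.

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setKey replaces an existing `key = ...` line or appends one if missing,
    // mirroring the replace-or-append edits applied to the CRI-O drop-in above.
    func setKey(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
        line := fmt.Sprintf(`%s = %q`, key, value)
        if re.Match(conf) {
            return re.ReplaceAll(conf, []byte(line))
        }
        return append(conf, []byte("\n"+line+"\n")...)
    }

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf" // drop-in file named in the log
        conf, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
        conf = setKey(conf, "cgroup_manager", "cgroupfs")
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            panic(err)
        }
    }
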
	I1002 07:08:51.375720  341591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:08:51.375791  341591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:08:51.380249  341591 start.go:563] Will wait 60s for crictl version
	I1002 07:08:51.380325  341591 ssh_runner.go:195] Run: which crictl
	I1002 07:08:51.384127  341591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:08:51.409087  341591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:08:51.409174  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:08:51.443563  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:08:51.476455  341591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:08:51.479290  341591 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:08:51.500260  341591 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:08:51.504889  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:08:51.515269  341591 kubeadm.go:883] updating cluster {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:08:51.515427  341591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:08:51.515487  341591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:08:51.554872  341591 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:08:51.554894  341591 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:08:51.554950  341591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:08:51.581938  341591 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:08:51.581962  341591 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:08:51.581972  341591 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:08:51.582066  341591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:08:51.582150  341591 ssh_runner.go:195] Run: crio config
	I1002 07:08:51.655227  341591 cni.go:84] Creating CNI manager for ""
	I1002 07:08:51.655292  341591 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:08:51.655338  341591 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:08:51.655381  341591 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-550225 NodeName:ha-550225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:08:51.655547  341591 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-550225"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
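
The rendered kubeadm config above stacks four documents: an InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration. The kubelet's cgroupDriver has to agree with the cgroup_manager value written into the CRI-O drop-in earlier. A minimal sketch of reading that field back out of such a document with gopkg.in/yaml.v3; the struct below models only the checked fields and is not minikube's or the kubelet's own type.

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    // kubeletConfig models only the fields checked here.
    type kubeletConfig struct {
        Kind                     string `yaml:"kind"`
        CgroupDriver             string `yaml:"cgroupDriver"`
        ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
        StaticPodPath            string `yaml:"staticPodPath"`
    }

    // doc repeats the relevant part of the KubeletConfiguration rendered above.
    const doc = `
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    staticPodPath: /etc/kubernetes/manifests
    `

    func main() {
        var kc kubeletConfig
        if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
            panic(err)
        }
        fmt.Printf("%s: cgroupDriver=%s endpoint=%s staticPodPath=%s\n",
            kc.Kind, kc.CgroupDriver, kc.ContainerRuntimeEndpoint, kc.StaticPodPath)
        // The kubelet driver must match the cgroup_manager configured for CRI-O.
        if kc.CgroupDriver != "cgroupfs" {
            fmt.Println("warning: kubelet and CRI-O cgroup drivers disagree")
        }
    }
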
	
	I1002 07:08:51.655604  341591 kube-vip.go:115] generating kube-vip config ...
	I1002 07:08:51.655689  341591 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:08:51.669633  341591 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:08:51.669809  341591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
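
kube-vip runs as a static pod: the manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (the scp step a few lines below), the kubelet picks it up via staticPodPath, and the manager advertises 192.168.49.254 on eth0. A minimal text/template sketch of producing such a manifest from those values; the template here is a trimmed stand-in, not minikube's actual kube-vip template.

    package main

    import (
        "os"
        "text/template"
    )

    // vipParams holds the values substituted into the stub manifest; in the log
    // above they are eth0, 192.168.49.254 and ghcr.io/kube-vip/kube-vip:v1.0.0.
    type vipParams struct {
        Interface string
        Address   string
        Image     string
    }

    // manifestTmpl is a deliberately trimmed stand-in for a kube-vip static pod
    // manifest, kept only to show the templating step.
    const manifestTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{ .Image }}
        args: ["manager"]
        env:
        - name: vip_interface
          value: {{ .Interface }}
        - name: address
          value: {{ .Address }}
      hostNetwork: true
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
        p := vipParams{Interface: "eth0", Address: "192.168.49.254", Image: "ghcr.io/kube-vip/kube-vip:v1.0.0"}
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }
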
	I1002 07:08:51.669912  341591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:08:51.678877  341591 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:08:51.678968  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 07:08:51.687674  341591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:08:51.701824  341591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:08:51.715602  341591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1002 07:08:51.729053  341591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:08:51.742491  341591 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:08:51.746387  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:08:51.756532  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:08:51.864835  341591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:08:51.883513  341591 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.2
	I1002 07:08:51.883542  341591 certs.go:195] generating shared ca certs ...
	I1002 07:08:51.883559  341591 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:51.883827  341591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:08:51.883890  341591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:08:51.883904  341591 certs.go:257] generating profile certs ...
	I1002 07:08:51.884024  341591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:08:51.884065  341591 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa
	I1002 07:08:51.884101  341591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1002 07:08:52.084876  341591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa ...
	I1002 07:08:52.084913  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa: {Name:mk90c6f5aee289b034fa32e2cf7c0be9f53e848e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.085095  341591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa ...
	I1002 07:08:52.085111  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa: {Name:mk49689d29918ab68ff897f47cace9dfee85c265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.085191  341591 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt
	I1002 07:08:52.085343  341591 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key
	I1002 07:08:52.085487  341591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:08:52.085509  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:08:52.085529  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:08:52.085552  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:08:52.085570  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:08:52.085588  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:08:52.085612  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:08:52.085628  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:08:52.085643  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:08:52.085700  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:08:52.085732  341591 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:08:52.085744  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:08:52.085773  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:08:52.085797  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:08:52.085823  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:08:52.085877  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:08:52.085911  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.085930  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.085941  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.087620  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:08:52.117144  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:08:52.137577  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:08:52.157475  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:08:52.184553  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:08:52.204351  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:08:52.223284  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:08:52.243353  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:08:52.262671  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:08:52.281139  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:08:52.299758  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:08:52.317722  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:08:52.331012  341591 ssh_runner.go:195] Run: openssl version
	I1002 07:08:52.338277  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:08:52.346960  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.351159  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.351246  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.393022  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:08:52.401297  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:08:52.409980  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.414890  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.414990  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.456952  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:08:52.465241  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:08:52.474008  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.478217  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.478283  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.521200  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
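
The three certificate installs above follow the same pattern: copy the PEM into /usr/share/ca-certificates, ask openssl for its subject hash, then symlink /etc/ssl/certs/<hash>.0 to the PEM so OpenSSL-based clients can find it. A minimal Go sketch of that step, shelling out to openssl the same way the log does (the certificate path is illustrative and error handling is reduced to log.Fatal):

    // Hypothetical sketch of the hash-and-symlink step shown above: ask openssl
    // for the certificate's subject hash and link /etc/ssl/certs/<hash>.0 at it.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        certPath := "/usr/share/ca-certificates/minikubeCA.pem" // assumption: PEM already copied to the node

        // Equivalent of: openssl x509 -hash -noout -in <cert>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for the minikube CA above

        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // Equivalent of: test -L <link> || ln -fs <cert> <link>
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(certPath, link); err != nil {
                log.Fatal(err)
            }
        }
    }
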
	I1002 07:08:52.529506  341591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:08:52.535033  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:08:52.580207  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:08:52.630699  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:08:52.691156  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:08:52.745220  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:08:52.803585  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
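
Each "-checkend 86400" run above asks openssl whether the certificate will still be valid 24 hours from now, which is how the restart path decides whether certificates need regenerating. The same check can be expressed directly in Go with crypto/x509; this is a hypothetical equivalent, not minikube's code, and the path is illustrative:

    // Hypothetical Go equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
    // parse the PEM certificate and report whether it expires within the next 24h.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // illustrative path
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h; regeneration needed")
        } else {
            fmt.Println("certificate is valid for at least another 24h")
        }
    }
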
	I1002 07:08:52.888339  341591 kubeadm.go:400] StartCluster: {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:08:52.888575  341591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:08:52.888690  341591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:08:52.933281  341591 cri.go:89] found id: "33fca634f948db8aca5186955624e23716df2846985727034e3329708ce55ca0"
	I1002 07:08:52.933358  341591 cri.go:89] found id: "d6201e9ebb1f7834795f1ed34af1c1531b7711bfef7ba9ec4f8b86cb19833552"
	I1002 07:08:52.933379  341591 cri.go:89] found id: "a09069dcbe74c144c7fb0aaabba0782111369a1c5d884db352906bac62c464a7"
	I1002 07:08:52.933401  341591 cri.go:89] found id: "ff6f36ad276da8f6ea87b58c1a6e4675a17751c812adf0bea3fb2ce4a3183dc0"
	I1002 07:08:52.933436  341591 cri.go:89] found id: "1360f133f64f29f11610a00ea639f98b5d2bbaae5d3ea5c0f099d47a97c24451"
	I1002 07:08:52.933462  341591 cri.go:89] found id: ""
	I1002 07:08:52.933564  341591 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 07:08:52.954557  341591 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T07:08:52Z" level=error msg="open /run/runc: no such file or directory"
	I1002 07:08:52.954731  341591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:08:52.966519  341591 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:08:52.966556  341591 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:08:52.966613  341591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:08:52.977313  341591 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:08:52.977720  341591 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-550225" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:08:52.977831  341591 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-292504/kubeconfig needs updating (will repair): [kubeconfig missing "ha-550225" cluster setting kubeconfig missing "ha-550225" context setting]
	I1002 07:08:52.978102  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.978623  341591 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:08:52.979134  341591 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:08:52.979154  341591 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:08:52.979160  341591 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:08:52.979165  341591 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:08:52.979174  341591 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:08:52.979433  341591 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:08:52.979820  341591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:08:52.995042  341591 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:08:52.995069  341591 kubeadm.go:601] duration metric: took 28.506605ms to restartPrimaryControlPlane
	I1002 07:08:52.995093  341591 kubeadm.go:402] duration metric: took 106.757943ms to StartCluster
	I1002 07:08:52.995110  341591 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.995174  341591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:08:52.995752  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.995946  341591 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:08:52.995973  341591 start.go:241] waiting for startup goroutines ...
	I1002 07:08:52.995988  341591 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:08:52.996396  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:53.001878  341591 out.go:179] * Enabled addons: 
	I1002 07:08:53.004925  341591 addons.go:514] duration metric: took 8.918946ms for enable addons: enabled=[]
	I1002 07:08:53.004983  341591 start.go:246] waiting for cluster config update ...
	I1002 07:08:53.004993  341591 start.go:255] writing updated cluster config ...
	I1002 07:08:53.008718  341591 out.go:203] 
	I1002 07:08:53.012058  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:53.012193  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:53.015686  341591 out.go:179] * Starting "ha-550225-m02" control-plane node in "ha-550225" cluster
	I1002 07:08:53.018685  341591 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:08:53.021796  341591 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:08:53.024737  341591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:08:53.024783  341591 cache.go:58] Caching tarball of preloaded images
	I1002 07:08:53.024902  341591 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:08:53.024918  341591 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:08:53.025045  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:53.025270  341591 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:08:53.053242  341591 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:08:53.053267  341591 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:08:53.053282  341591 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:08:53.053306  341591 start.go:360] acquireMachinesLock for ha-550225-m02: {Name:mk11ef625bc214163cbeacdb736ddec4214a8374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:08:53.053365  341591 start.go:364] duration metric: took 39.27µs to acquireMachinesLock for "ha-550225-m02"
	I1002 07:08:53.053391  341591 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:08:53.053401  341591 fix.go:54] fixHost starting: m02
	I1002 07:08:53.053663  341591 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:08:53.082995  341591 fix.go:112] recreateIfNeeded on ha-550225-m02: state=Stopped err=<nil>
	W1002 07:08:53.083020  341591 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:08:53.086409  341591 out.go:252] * Restarting existing docker container for "ha-550225-m02" ...
	I1002 07:08:53.086490  341591 cli_runner.go:164] Run: docker start ha-550225-m02
	I1002 07:08:53.526547  341591 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:08:53.560540  341591 kic.go:430] container "ha-550225-m02" state is running.
	I1002 07:08:53.560941  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:08:53.589319  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:53.589569  341591 machine.go:93] provisionDockerMachine start ...
	I1002 07:08:53.589631  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:53.613911  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:53.614275  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:53.614286  341591 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:08:53.615331  341591 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 07:08:56.845810  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:08:56.845831  341591 ubuntu.go:182] provisioning hostname "ha-550225-m02"
	I1002 07:08:56.845894  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:56.874342  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:56.874643  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:56.874653  341591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225-m02 && echo "ha-550225-m02" | sudo tee /etc/hostname
	I1002 07:08:57.125200  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:08:57.125348  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:57.175744  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:57.176048  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:57.176063  341591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:08:57.375895  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:08:57.375973  341591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:08:57.376006  341591 ubuntu.go:190] setting up certificates
	I1002 07:08:57.376047  341591 provision.go:84] configureAuth start
	I1002 07:08:57.376159  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:08:57.404649  341591 provision.go:143] copyHostCerts
	I1002 07:08:57.404689  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:57.404723  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:08:57.404730  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:57.404806  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:08:57.404883  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:57.404899  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:08:57.404903  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:57.404928  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:08:57.404966  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:57.404981  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:08:57.404985  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:57.405007  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:08:57.405049  341591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225-m02 san=[127.0.0.1 192.168.49.3 ha-550225-m02 localhost minikube]
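
provision.go:117 above issues a docker-machine server certificate signed by the local CA with the listed SANs. A compact sketch of issuing such a certificate with crypto/x509 follows; the file names, RSA/PKCS#1 key type, and validity period are assumptions for illustration, not minikube's exact implementation (error handling is elided for brevity):

    // Hypothetical sketch: issue a server certificate signed by the local CA with
    // the SANs from the log above (localhost, minikube, ha-550225-m02, 127.0.0.1,
    // 192.168.49.3). Paths and key type are illustrative assumptions.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caPEM, _ := os.ReadFile("ca.pem")        // assumption: CA certificate in PEM form
        caKeyPEM, _ := os.ReadFile("ca-key.pem") // assumption: RSA PKCS#1 CA key
        caBlock, _ := pem.Decode(caPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-550225-m02"}},
            DNSNames:     []string{"localhost", "minikube", "ha-550225-m02"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        _ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
        _ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
    }
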
	I1002 07:08:58.253352  341591 provision.go:177] copyRemoteCerts
	I1002 07:08:58.253471  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:08:58.253549  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:58.284716  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:58.445457  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:08:58.445522  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:08:58.470364  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:08:58.470427  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:08:58.499404  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:08:58.499467  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 07:08:58.532579  341591 provision.go:87] duration metric: took 1.156483399s to configureAuth
	I1002 07:08:58.532607  341591 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:08:58.532851  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:58.532977  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:58.555257  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:58.555589  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:58.555604  341591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:08:59.611219  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:08:59.611244  341591 machine.go:96] duration metric: took 6.021666332s to provisionDockerMachine
	I1002 07:08:59.611278  341591 start.go:293] postStartSetup for "ha-550225-m02" (driver="docker")
	I1002 07:08:59.611297  341591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:08:59.611400  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:08:59.611473  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.649812  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:59.756024  341591 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:08:59.760197  341591 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:08:59.760226  341591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:08:59.760237  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:08:59.760299  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:08:59.760377  341591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:08:59.760384  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:08:59.760484  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:08:59.769466  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:08:59.791590  341591 start.go:296] duration metric: took 180.289185ms for postStartSetup
	I1002 07:08:59.791715  341591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:08:59.791794  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.812896  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:59.913229  341591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:08:59.919306  341591 fix.go:56] duration metric: took 6.865897009s for fixHost
	I1002 07:08:59.919329  341591 start.go:83] releasing machines lock for "ha-550225-m02", held for 6.865950129s
	I1002 07:08:59.919398  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:08:59.946647  341591 out.go:179] * Found network options:
	I1002 07:08:59.949695  341591 out.go:179]   - NO_PROXY=192.168.49.2
	W1002 07:08:59.952715  341591 proxy.go:120] fail to check proxy env: Error ip not in block
	W1002 07:08:59.952759  341591 proxy.go:120] fail to check proxy env: Error ip not in block
	I1002 07:08:59.952829  341591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:08:59.952894  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.953175  341591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:08:59.953233  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.989027  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:59.990560  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:09:00.478157  341591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:09:00.501356  341591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:09:00.501454  341591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:09:00.524313  341591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:09:00.524374  341591 start.go:495] detecting cgroup driver to use...
	I1002 07:09:00.524424  341591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:09:00.524542  341591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:09:00.551686  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:09:00.586292  341591 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:09:00.586360  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:09:00.619869  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:09:00.637822  341591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:09:01.096286  341591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:09:01.469209  341591 docker.go:234] disabling docker service ...
	I1002 07:09:01.469292  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:09:01.568628  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:09:01.594625  341591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:09:01.844380  341591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:09:02.076706  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:09:02.091901  341591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:09:02.109279  341591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:09:02.109364  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.122659  341591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:09:02.122751  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.137700  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.152110  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.170421  341591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:09:02.185373  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.201415  341591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.215850  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.226273  341591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:09:02.235058  341591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:09:02.244989  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:09:02.482152  341591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:10:32.816328  341591 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.334137072s)
	I1002 07:10:32.816356  341591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:10:32.816423  341591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:10:32.820364  341591 start.go:563] Will wait 60s for crictl version
	I1002 07:10:32.820431  341591 ssh_runner.go:195] Run: which crictl
	I1002 07:10:32.824000  341591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:10:32.850862  341591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:10:32.850953  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:10:32.880614  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:10:32.912245  341591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:10:32.915198  341591 out.go:179]   - env NO_PROXY=192.168.49.2
	I1002 07:10:32.918443  341591 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:10:32.933458  341591 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:10:32.937660  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:10:32.947835  341591 mustload.go:65] Loading cluster: ha-550225
	I1002 07:10:32.948074  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:10:32.948339  341591 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:10:32.965455  341591 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:10:32.965737  341591 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.3
	I1002 07:10:32.965753  341591 certs.go:195] generating shared ca certs ...
	I1002 07:10:32.965768  341591 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:10:32.965883  341591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:10:32.965988  341591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:10:32.966005  341591 certs.go:257] generating profile certs ...
	I1002 07:10:32.966093  341591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:10:32.966164  341591 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.e172f685
	I1002 07:10:32.966209  341591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:10:32.966223  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:10:32.966236  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:10:32.966258  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:10:32.966274  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:10:32.966287  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:10:32.966299  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:10:32.966316  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:10:32.966327  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:10:32.966380  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:10:32.966412  341591 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:10:32.966426  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:10:32.966450  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:10:32.966474  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:10:32.966495  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:10:32.966534  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:10:32.966563  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:32.966580  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:10:32.966591  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:10:32.966649  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:10:32.984090  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:10:33.079415  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1002 07:10:33.085346  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1002 07:10:33.094080  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1002 07:10:33.098124  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1002 07:10:33.106895  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1002 07:10:33.110488  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1002 07:10:33.119266  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1002 07:10:33.123712  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1002 07:10:33.133884  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1002 07:10:33.137901  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1002 07:10:33.146372  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1002 07:10:33.150238  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1002 07:10:33.158857  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:10:33.178733  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:10:33.198632  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:10:33.218076  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:10:33.238363  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:10:33.257196  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:10:33.276752  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:10:33.296959  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:10:33.315515  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:10:33.334382  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:10:33.353232  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:10:33.371930  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1002 07:10:33.386343  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1002 07:10:33.402145  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1002 07:10:33.416991  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1002 07:10:33.433404  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1002 07:10:33.447888  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1002 07:10:33.461804  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1002 07:10:33.478080  341591 ssh_runner.go:195] Run: openssl version
	I1002 07:10:33.486077  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:10:33.496093  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:33.500252  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:33.500323  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:33.542203  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:10:33.550474  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:10:33.559422  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:10:33.563475  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:10:33.563544  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:10:33.606638  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:10:33.614955  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:10:33.624760  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:10:33.629454  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:10:33.629532  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:10:33.670697  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:10:33.679136  341591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:10:33.683757  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:10:33.729404  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:10:33.775724  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:10:33.817095  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:10:33.859304  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:10:33.900718  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 07:10:33.942018  341591 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1002 07:10:33.942118  341591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:10:33.942147  341591 kube-vip.go:115] generating kube-vip config ...
	I1002 07:10:33.942211  341591 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:10:33.955152  341591 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:10:33.955209  341591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
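
Before rendering the manifest above, kube-vip.go probes for the ip_vs kernel module with "lsmod | grep ip_vs" and, because the probe fails, gives up on control-plane load-balancing and emits the plain leader-election config shown. lsmod is only a formatted view of /proc/modules, so a hypothetical Go version of the same probe could read that file directly:

    // Hypothetical sketch of the ip_vs probe behind the log lines above: scan
    // /proc/modules (the data lsmod prints) for an ip_vs entry.
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/modules")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        found := false
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            fields := strings.Fields(scanner.Text())
            if len(fields) == 0 {
                continue
            }
            if fields[0] == "ip_vs" || strings.HasPrefix(fields[0], "ip_vs_") {
                found = true
                break
            }
        }
        if found {
            fmt.Println("ip_vs available: control-plane load-balancing could be enabled")
        } else {
            fmt.Println("ip_vs missing: fall back to the plain kube-vip config shown above")
        }
    }
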
	I1002 07:10:33.955278  341591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:10:33.964060  341591 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:10:33.964146  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1002 07:10:33.972349  341591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 07:10:33.986955  341591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:10:34.000411  341591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:10:34.019944  341591 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:10:34.024237  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:10:34.035378  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:10:34.172194  341591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:10:34.188479  341591 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:10:34.188914  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:10:34.194079  341591 out.go:179] * Verifying Kubernetes components...
	I1002 07:10:34.196849  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:10:34.335762  341591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:10:34.350979  341591 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1002 07:10:34.351051  341591 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1002 07:10:34.351428  341591 node_ready.go:35] waiting up to 6m0s for node "ha-550225-m02" to be "Ready" ...
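
node_ready.go:35 above starts a bounded poll, and the warnings that follow are its individual attempts failing while the first control-plane's apiserver is still restarting. Schematically the loop looks like the sketch below; this is a generic illustration of the retry pattern, not minikube's actual implementation:

    // Schematic poll loop for the "waiting up to 6m0s for node ... to be Ready"
    // phase: retry a condition on an interval until it succeeds or the deadline
    // passes. The condition func and durations here are illustrative only.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func waitFor(timeout, interval time.Duration, cond func() (bool, error)) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ok, err := cond()
            if err != nil {
                fmt.Println("will retry:", err) // mirrors the node_ready.go warnings below
            } else if ok {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New("timed out waiting for node to be Ready")
    }

    func main() {
        _ = waitFor(6*time.Minute, 2*time.Second, func() (bool, error) {
            // In minikube this is a GET of /api/v1/nodes/<name> followed by a
            // check of the node's Ready condition; stubbed out here.
            return false, errors.New("dial tcp 192.168.49.2:8443: connect: connection refused")
        })
    }
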
	I1002 07:11:06.236659  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:11:06.237065  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1002 07:11:08.352628  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:10.352901  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:12.852094  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:14.852800  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:19.143807  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:12:19.144210  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:52046->192.168.49.2:8443: read: connection reset by peer
	W1002 07:12:21.352097  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:23.352198  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:25.352707  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:27.852697  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:30.352903  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:32.852934  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:35.352921  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:37.852899  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:40.352147  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:13:45.017485  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:13:45.017917  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:59354->192.168.49.2:8443: read: connection reset by peer
	W1002 07:13:47.352022  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:49.352714  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:51.352825  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:53.852618  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:55.852865  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:58.351961  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:00.352833  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:02.852671  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:04.852832  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:06.852923  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:09.352699  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:11.852644  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:14.352881  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:16.852748  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:19.352661  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:21.852776  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:23.852965  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:25.853064  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:38.355323  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	W1002 07:14:48.356581  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	I1002 07:14:50.705710  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:14:50.706028  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:34198->192.168.49.2:8443: read: connection reset by peer
	W1002 07:14:52.852642  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:55.352291  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:57.352649  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:59.852686  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:02.351992  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:04.352640  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:06.852688  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:09.351928  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:11.352599  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:13.352684  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:15.852672  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:17.852933  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:20.352697  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:22.852904  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:25.352921  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:27.852663  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:30.352554  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:32.352752  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:34.352832  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:36.852783  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:39.352648  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:41.352902  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:43.851962  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:46.352385  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:48.352592  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:50.352899  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:52.852880  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:55.352702  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:57.852560  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:59.852697  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:01.852832  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:04.352611  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:06.852632  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:08.852866  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:20.352850  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	W1002 07:16:30.353494  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	I1002 07:16:32.822894  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:16:32.823551  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:44364->192.168.49.2:8443: read: connection reset by peer
	I1002 07:16:34.352311  341591 node_ready.go:38] duration metric: took 6m0.000854058s for node "ha-550225-m02" to be "Ready" ...
	I1002 07:16:34.356665  341591 out.go:203] 
	W1002 07:16:34.359815  341591 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 07:16:34.359839  341591 out.go:285] * 
	W1002 07:16:34.362170  341591 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:16:34.365348  341591 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.079197127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.08556225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.086082362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.107512425Z" level=info msg="Created container 075b15e6c74a52fc823514f3eb205759d40a99a80d0859594b42aca28159924d: kube-system/kube-controller-manager-ha-550225/kube-controller-manager" id=e9671816-71ab-4ee2-9a2b-f2ddea4bdc9a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.108304924Z" level=info msg="Starting container: 075b15e6c74a52fc823514f3eb205759d40a99a80d0859594b42aca28159924d" id=019ec56d-600d-4a41-a942-abd9b0a4b5cf name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.110179289Z" level=info msg="Started container" PID=1236 containerID=075b15e6c74a52fc823514f3eb205759d40a99a80d0859594b42aca28159924d description=kube-system/kube-controller-manager-ha-550225/kube-controller-manager id=019ec56d-600d-4a41-a942-abd9b0a4b5cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c10db252af9dad7133c29cf3fd7ff82b0ebcd9783fb3ae1d2569c9b69373fb8
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.077756642Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=4d37191b-e380-430b-9019-cfb9dcd6f54d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.079249145Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=52cf1b86-8421-41d9-9bd0-29ca469613d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.080627537Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-550225/kube-apiserver" id=914c8388-6f74-471c-aa31-3a90fd94f956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.080887618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.089577727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.090437329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.121966352Z" level=info msg="Created container ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f: kube-system/kube-apiserver-ha-550225/kube-apiserver" id=914c8388-6f74-471c-aa31-3a90fd94f956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.122937741Z" level=info msg="Starting container: ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f" id=f5515dda-06cd-465d-9126-0a5d2d0f75c5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.134703942Z" level=info msg="Started container" PID=1247 containerID=ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f description=kube-system/kube-apiserver-ha-550225/kube-apiserver id=f5515dda-06cd-465d-9126-0a5d2d0f75c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6915c27c6e4c56041d4460161b4b50ad554915297fd4510ea5142f073c63dcf8
	Oct 02 07:16:31 ha-550225 conmon[1244]: conmon ec59b9b67a698e5db189 <ninfo>: container 1247 exited with status 255
	Oct 02 07:16:31 ha-550225 crio[664]: time="2025-10-02T07:16:31.825737329Z" level=info msg="Stopping container: ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f (timeout: 30s)" id=41cc4e72-76db-457f-859f-5e5fe66d5076 name=/runtime.v1.RuntimeService/StopContainer
	Oct 02 07:16:31 ha-550225 crio[664]: time="2025-10-02T07:16:31.836349221Z" level=info msg="Stopped container ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f: kube-system/kube-apiserver-ha-550225/kube-apiserver" id=41cc4e72-76db-457f-859f-5e5fe66d5076 name=/runtime.v1.RuntimeService/StopContainer
	Oct 02 07:16:32 ha-550225 crio[664]: time="2025-10-02T07:16:32.207132978Z" level=info msg="Removing container: 7b6abe1f2f6e802787eb5442b81fb8a6b3fcefd828d59667468088d5032dd0c4" id=d30010dd-5488-4c1d-9b4d-6f59d8f62713 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:32 ha-550225 crio[664]: time="2025-10-02T07:16:32.215867806Z" level=info msg="Error loading conmon cgroup of container 7b6abe1f2f6e802787eb5442b81fb8a6b3fcefd828d59667468088d5032dd0c4: cgroup deleted" id=d30010dd-5488-4c1d-9b4d-6f59d8f62713 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:32 ha-550225 crio[664]: time="2025-10-02T07:16:32.218879511Z" level=info msg="Removed container 7b6abe1f2f6e802787eb5442b81fb8a6b3fcefd828d59667468088d5032dd0c4: kube-system/kube-apiserver-ha-550225/kube-apiserver" id=d30010dd-5488-4c1d-9b4d-6f59d8f62713 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:32 ha-550225 conmon[1233]: conmon 075b15e6c74a52fc8235 <ninfo>: container 1236 exited with status 1
	Oct 02 07:16:33 ha-550225 crio[664]: time="2025-10-02T07:16:33.212373377Z" level=info msg="Removing container: a7d0e0a58f7b8248b82d9489ac4e72aa74556902886fc58d6212397adf27e207" id=ca09262e-435d-4b74-8729-ff01bba5fbce name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:33 ha-550225 crio[664]: time="2025-10-02T07:16:33.219592613Z" level=info msg="Error loading conmon cgroup of container a7d0e0a58f7b8248b82d9489ac4e72aa74556902886fc58d6212397adf27e207: cgroup deleted" id=ca09262e-435d-4b74-8729-ff01bba5fbce name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:33 ha-550225 crio[664]: time="2025-10-02T07:16:33.222685139Z" level=info msg="Removed container a7d0e0a58f7b8248b82d9489ac4e72aa74556902886fc58d6212397adf27e207: kube-system/kube-controller-manager-ha-550225/kube-controller-manager" id=ca09262e-435d-4b74-8729-ff01bba5fbce name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	ec59b9b67a698       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   25 seconds ago      Exited              kube-apiserver            6                   6915c27c6e4c5       kube-apiserver-ha-550225            kube-system
	075b15e6c74a5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   26 seconds ago      Exited              kube-controller-manager   7                   4c10db252af9d       kube-controller-manager-ha-550225   kube-system
	883d49fba5ac5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   2 minutes ago       Running             etcd                      2                   b3ee9fc964046       etcd-ha-550225                      kube-system
	d6201e9ebb1f7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Exited              etcd                      1                   b3ee9fc964046       etcd-ha-550225                      kube-system
	a09069dcbe74c       27aa99ef07bb63db109cae7189f6029203a1ba86e8d201ca72eb836e3cdd0b43   7 minutes ago       Running             kube-vip                  0                   0cbc1c071aca4       kube-vip-ha-550225                  kube-system
	ff6f36ad276da       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            1                   356b386bea9bb       kube-scheduler-ha-550225            kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014797] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531434] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.039899] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.787301] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.571073] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 2 05:52] hrtimer: interrupt took 24222969 ns
	[Oct 2 06:40] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:42] overlayfs: idmapped layers are currently not supported
	[  +0.072713] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 06:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 06:49] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:03] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:06] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:07] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:08] overlayfs: idmapped layers are currently not supported
	[  +3.056037] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [883d49fba5ac5d237dfa6b26b5b95e98f640c5dea3f2599a3b517c0c8be55896] <==
	{"level":"warn","ts":"2025-10-02T07:16:32.119535Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040356167889185,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-02T07:16:32.620223Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040356167889185,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-02T07:16:33.120651Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040356167889185,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-10-02T07:16:33.317832Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:33.317892Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:33.317914Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2288] sent MsgPreVote request to 340e91ee989e8740 at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:33.317925Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2288] sent MsgPreVote request to ae3c16a0ff0d2d6f at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:33.317958Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:33.317969Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-10-02T07:16:33.621495Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040356167889185,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-02T07:16:33.854109Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"340e91ee989e8740","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-10-02T07:16:33.854144Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ae3c16a0ff0d2d6f","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-02T07:16:33.854189Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ae3c16a0ff0d2d6f","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-02T07:16:33.854192Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"340e91ee989e8740","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-10-02T07:16:34.121903Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040356167889185,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-02T07:16:34.613128Z","caller":"etcdserver/v3_server.go:923","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2025-10-02T07:16:34.613219Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"7.000850899s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2025-10-02T07:16:34.613252Z","caller":"traceutil/trace.go:172","msg":"trace[51641939] range","detail":"{range_begin:; range_end:; }","duration":"7.000901656s","start":"2025-10-02T07:16:27.612339Z","end":"2025-10-02T07:16:34.613241Z","steps":["trace[51641939] 'agreement among raft nodes before linearized reading'  (duration: 7.000848183s)"],"step_count":1}
	{"level":"error","ts":"2025-10-02T07:16:34.613285Z","caller":"etcdhttp/health.go:345","msg":"Health check error","path":"/readyz","reason":"[+]non_learner ok\n[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHTTPEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:345\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2220\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2747\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3210\nnet/http.(*conn).serve\n\tnet/http/server.go:2092"}
	{"level":"info","ts":"2025-10-02T07:16:34.917802Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:34.917854Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:34.917878Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2288] sent MsgPreVote request to 340e91ee989e8740 at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:34.917892Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2288] sent MsgPreVote request to ae3c16a0ff0d2d6f at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:34.917920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:34.917931Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	
	
	==> etcd [d6201e9ebb1f7834795f1ed34af1c1531b7711bfef7ba9ec4f8b86cb19833552] <==
	{"level":"info","ts":"2025-10-02T07:14:08.631118Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T07:14:08.631162Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:14:08.631196Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:14:08.631205Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:14:08.631187Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"340e91ee989e8740"}
	{"level":"warn","ts":"2025-10-02T07:14:08.631246Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:14:08.631303Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:14:08.631312Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:14:08.631281Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631330Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631404Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631429Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631449Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631462Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631473Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631483Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631503Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631522Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631535Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631547Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631563Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.635633Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T07:14:08.635736Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:14:08.635777Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T07:14:08.635785Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-550225","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 07:16:35 up  1:59,  0 user,  load average: 0.23, 0.65, 1.20
	Linux ha-550225 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f] <==
	I1002 07:16:10.211958       1 server.go:150] Version: v1.34.1
	I1002 07:16:10.212068       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1002 07:16:11.752050       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1002 07:16:11.752133       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1002 07:16:11.752166       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1002 07:16:11.752200       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1002 07:16:11.752232       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1002 07:16:11.752261       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1002 07:16:11.752293       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1002 07:16:11.752324       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1002 07:16:11.752356       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1002 07:16:11.752386       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1002 07:16:11.752419       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1002 07:16:11.752463       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	I1002 07:16:11.788269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	W1002 07:16:11.797033       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1002 07:16:11.798587       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1002 07:16:11.811605       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 07:16:11.815343       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1002 07:16:11.815458       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1002 07:16:11.816365       1 instance.go:239] Using reconciler: lease
	W1002 07:16:11.818696       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1002 07:16:31.782202       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1002 07:16:31.790759       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1002 07:16:31.817391       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [075b15e6c74a52fc823514f3eb205759d40a99a80d0859594b42aca28159924d] <==
	I1002 07:16:10.596311       1 serving.go:386] Generated self-signed cert in-memory
	I1002 07:16:12.050398       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1002 07:16:12.050434       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:16:12.052007       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 07:16:12.052116       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 07:16:12.052771       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1002 07:16:12.052830       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 07:16:32.827381       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [ff6f36ad276da8f6ea87b58c1a6e4675a17751c812adf0bea3fb2ce4a3183dc0] <==
	E1002 07:15:37.102073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:15:38.442601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:15:41.823331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:15:44.775785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:15:45.258574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:15:46.491372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:15:46.769593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:15:52.124898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:15:57.001159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:15:57.379525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:15:59.973932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:16:00.856989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 07:16:04.337932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:16:05.218671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 07:16:22.641657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:16:23.826431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 07:16:26.580558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:16:29.569675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:16:32.831141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53042->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:16:32.831282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53062->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:16:32.831376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53078->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:16:32.831476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53080->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:16:32.831576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:60646->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:16:32.831659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53114->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:16:33.912373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	
	
	==> kubelet <==
	Oct 02 07:16:33 ha-550225 kubelet[799]: E1002 07:16:33.633439     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:33 ha-550225 kubelet[799]: E1002 07:16:33.734681     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:33 ha-550225 kubelet[799]: E1002 07:16:33.835623     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:33 ha-550225 kubelet[799]: E1002 07:16:33.936418     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:34 ha-550225 kubelet[799]: E1002 07:16:34.037319     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:34 ha-550225 kubelet[799]: E1002 07:16:34.138808     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:34 ha-550225 kubelet[799]: E1002 07:16:34.240126     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:34 ha-550225 kubelet[799]: E1002 07:16:34.341595     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:34 ha-550225 kubelet[799]: E1002 07:16:34.442695     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:34 ha-550225 kubelet[799]: E1002 07:16:34.543841     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:34 ha-550225 kubelet[799]: E1002 07:16:34.644729     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:34 ha-550225 kubelet[799]: E1002 07:16:34.745867     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:34 ha-550225 kubelet[799]: E1002 07:16:34.846900     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:34 ha-550225 kubelet[799]: E1002 07:16:34.947703     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:35 ha-550225 kubelet[799]: E1002 07:16:35.049126     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:35 ha-550225 kubelet[799]: E1002 07:16:35.150756     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:35 ha-550225 kubelet[799]: E1002 07:16:35.251504     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:35 ha-550225 kubelet[799]: E1002 07:16:35.352841     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:35 ha-550225 kubelet[799]: E1002 07:16:35.453887     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:35 ha-550225 kubelet[799]: E1002 07:16:35.466260     799 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-550225\" not found" node="ha-550225"
	Oct 02 07:16:35 ha-550225 kubelet[799]: I1002 07:16:35.466353     799 scope.go:117] "RemoveContainer" containerID="ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f"
	Oct 02 07:16:35 ha-550225 kubelet[799]: E1002 07:16:35.466482     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-550225_kube-system(2528b61b042a52f2fce1b4e033501952)\"" pod="kube-system/kube-apiserver-ha-550225" podUID="2528b61b042a52f2fce1b4e033501952"
	Oct 02 07:16:35 ha-550225 kubelet[799]: E1002 07:16:35.555700     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:35 ha-550225 kubelet[799]: E1002 07:16:35.656990     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:35 ha-550225 kubelet[799]: E1002 07:16:35.757937     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-550225 -n ha-550225
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-550225 -n ha-550225: exit status 2 (345.308659ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-550225" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (508.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (2.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-550225 node delete m03 --alsologtostderr -v 5: exit status 83 (181.52276ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-550225-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-550225"

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:16:36.255238  345133 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:16:36.256117  345133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:36.256155  345133 out.go:374] Setting ErrFile to fd 2...
	I1002 07:16:36.256175  345133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:36.256528  345133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:16:36.256893  345133 mustload.go:65] Loading cluster: ha-550225
	I1002 07:16:36.257405  345133 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:36.257927  345133 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:16:36.276867  345133 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:16:36.277213  345133 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:16:36.337992  345133 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 07:16:36.327579567 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:16:36.338389  345133 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:16:36.355616  345133 host.go:66] Checking if "ha-550225-m02" exists ...
	I1002 07:16:36.356123  345133 cli_runner.go:164] Run: docker container inspect ha-550225-m03 --format={{.State.Status}}
	I1002 07:16:36.376797  345133 out.go:179] * The control-plane node ha-550225-m03 host is not running: state=Stopped
	I1002 07:16:36.379602  345133 out.go:179]   To start a cluster, run: "minikube start -p ha-550225"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-arm64 -p ha-550225 node delete m03 --alsologtostderr -v 5": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5: exit status 7 (508.655171ms)

                                                
                                                
-- stdout --
	ha-550225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-550225-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-550225-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-550225-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:16:36.435392  345189 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:16:36.435588  345189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:36.435662  345189 out.go:374] Setting ErrFile to fd 2...
	I1002 07:16:36.435683  345189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:36.435978  345189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:16:36.436202  345189 out.go:368] Setting JSON to false
	I1002 07:16:36.436275  345189 mustload.go:65] Loading cluster: ha-550225
	I1002 07:16:36.436312  345189 notify.go:220] Checking for updates...
	I1002 07:16:36.436800  345189 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:36.436852  345189 status.go:174] checking status of ha-550225 ...
	I1002 07:16:36.437765  345189 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:16:36.455773  345189 status.go:371] ha-550225 host status = "Running" (err=<nil>)
	I1002 07:16:36.455799  345189 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:16:36.456101  345189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:36.482925  345189 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:16:36.483257  345189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:16:36.483310  345189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:36.501878  345189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:36.596335  345189 ssh_runner.go:195] Run: systemctl --version
	I1002 07:16:36.602642  345189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:16:36.615317  345189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:16:36.672226  345189 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 07:16:36.662492854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:16:36.672787  345189 kubeconfig.go:125] found "ha-550225" server: "https://192.168.49.254:8443"
	I1002 07:16:36.672826  345189 api_server.go:166] Checking apiserver status ...
	I1002 07:16:36.672873  345189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 07:16:36.683033  345189 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:16:36.683060  345189 status.go:463] ha-550225 apiserver status = Running (err=<nil>)
	I1002 07:16:36.683072  345189 status.go:176] ha-550225 status: &{Name:ha-550225 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:16:36.683219  345189 status.go:174] checking status of ha-550225-m02 ...
	I1002 07:16:36.683531  345189 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:16:36.700447  345189 status.go:371] ha-550225-m02 host status = "Running" (err=<nil>)
	I1002 07:16:36.700472  345189 host.go:66] Checking if "ha-550225-m02" exists ...
	I1002 07:16:36.700767  345189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:36.717300  345189 host.go:66] Checking if "ha-550225-m02" exists ...
	I1002 07:16:36.717626  345189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:16:36.717688  345189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:36.734826  345189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:36.828772  345189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:16:36.843068  345189 kubeconfig.go:125] found "ha-550225" server: "https://192.168.49.254:8443"
	I1002 07:16:36.843169  345189 api_server.go:166] Checking apiserver status ...
	I1002 07:16:36.843226  345189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 07:16:36.853741  345189 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:16:36.853764  345189 status.go:463] ha-550225-m02 apiserver status = Running (err=<nil>)
	I1002 07:16:36.853774  345189 status.go:176] ha-550225-m02 status: &{Name:ha-550225-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:16:36.853791  345189 status.go:174] checking status of ha-550225-m03 ...
	I1002 07:16:36.854107  345189 cli_runner.go:164] Run: docker container inspect ha-550225-m03 --format={{.State.Status}}
	I1002 07:16:36.873113  345189 status.go:371] ha-550225-m03 host status = "Stopped" (err=<nil>)
	I1002 07:16:36.873143  345189 status.go:384] host is not running, skipping remaining checks
	I1002 07:16:36.873151  345189 status.go:176] ha-550225-m03 status: &{Name:ha-550225-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:16:36.873172  345189 status.go:174] checking status of ha-550225-m04 ...
	I1002 07:16:36.873489  345189 cli_runner.go:164] Run: docker container inspect ha-550225-m04 --format={{.State.Status}}
	I1002 07:16:36.890759  345189 status.go:371] ha-550225-m04 host status = "Stopped" (err=<nil>)
	I1002 07:16:36.890784  345189 status.go:384] host is not running, skipping remaining checks
	I1002 07:16:36.890792  345189 status.go:176] ha-550225-m04 status: &{Name:ha-550225-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-550225
helpers_test.go:243: (dbg) docker inspect ha-550225:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	        "Created": "2025-10-02T07:02:30.539981852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 341718,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:08:45.398672695Z",
	            "FinishedAt": "2025-10-02T07:08:44.591030685Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hosts",
	        "LogPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c-json.log",
	        "Name": "/ha-550225",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-550225:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-550225",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	                "LowerDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-550225",
	                "Source": "/var/lib/docker/volumes/ha-550225/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-550225",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-550225",
	                "name.minikube.sigs.k8s.io": "ha-550225",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c2d172050d987c718db772c5aba92de1dca5d0823f878bf48657984e81707ec",
	            "SandboxKey": "/var/run/docker/netns/8c2d172050d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-550225": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:7c:4c:83:e8:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "87a294cab4b5d50d5f227902c62678f378fbede9275f1d54f0b3de7a1f36e1a0",
	                    "EndpointID": "d33c1aff4a1a0ea6be34d85bfad24dbdc7a27874c0cd3475808500db307a6e4e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-550225",
	                        "1c1f8ec53310"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-550225 -n ha-550225
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-550225 -n ha-550225: exit status 2 (322.647742ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m02 sudo cat /home/docker/cp-test_ha-550225-m03_ha-550225-m02.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m03:/home/docker/cp-test.txt ha-550225-m04:/home/docker/cp-test_ha-550225-m03_ha-550225-m04.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test_ha-550225-m03_ha-550225-m04.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp testdata/cp-test.txt ha-550225-m04:/home/docker/cp-test.txt                                                             │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216719830/001/cp-test_ha-550225-m04.txt │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225:/home/docker/cp-test_ha-550225-m04_ha-550225.txt                       │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225.txt                                                 │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m02:/home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m02 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m03:/home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ node    │ ha-550225 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ node    │ ha-550225 node start m02 --alsologtostderr -v 5                                                                                      │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:08 UTC │
	│ node    │ ha-550225 node list --alsologtostderr -v 5                                                                                           │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │                     │
	│ stop    │ ha-550225 stop --alsologtostderr -v 5                                                                                                │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │ 02 Oct 25 07:08 UTC │
	│ start   │ ha-550225 start --wait true --alsologtostderr -v 5                                                                                   │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │                     │
	│ node    │ ha-550225 node list --alsologtostderr -v 5                                                                                           │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	│ node    │ ha-550225 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:08:44
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:08:44.939810  341591 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:08:44.940011  341591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:08:44.940043  341591 out.go:374] Setting ErrFile to fd 2...
	I1002 07:08:44.940065  341591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:08:44.940373  341591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:08:44.940829  341591 out.go:368] Setting JSON to false
	I1002 07:08:44.941737  341591 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6676,"bootTime":1759382249,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:08:44.941852  341591 start.go:140] virtualization:  
	I1002 07:08:44.945309  341591 out.go:179] * [ha-550225] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:08:44.949071  341591 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:08:44.949136  341591 notify.go:220] Checking for updates...
	I1002 07:08:44.954765  341591 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:08:44.957619  341591 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:08:44.960532  341591 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:08:44.963482  341591 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:08:44.966346  341591 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:08:44.969606  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:44.969708  341591 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:08:44.989812  341591 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:08:44.989931  341591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:08:45.116140  341591 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:08:45.103955411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:08:45.116266  341591 docker.go:318] overlay module found
	I1002 07:08:45.119605  341591 out.go:179] * Using the docker driver based on existing profile
	I1002 07:08:45.122721  341591 start.go:304] selected driver: docker
	I1002 07:08:45.122756  341591 start.go:924] validating driver "docker" against &{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:08:45.122900  341591 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:08:45.123044  341591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:08:45.249038  341591 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:08:45.234686313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:08:45.251229  341591 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:08:45.251295  341591 cni.go:84] Creating CNI manager for ""
	I1002 07:08:45.251506  341591 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:08:45.251808  341591 start.go:348] cluster config:
	{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:08:45.255266  341591 out.go:179] * Starting "ha-550225" primary control-plane node in "ha-550225" cluster
	I1002 07:08:45.258893  341591 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:08:45.262396  341591 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:08:45.265430  341591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:08:45.265522  341591 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:08:45.265535  341591 cache.go:58] Caching tarball of preloaded images
	I1002 07:08:45.265608  341591 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:08:45.265695  341591 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:08:45.265710  341591 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:08:45.265874  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:45.291884  341591 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:08:45.291911  341591 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:08:45.291937  341591 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:08:45.291963  341591 start.go:360] acquireMachinesLock for ha-550225: {Name:mkc1f009b4f35f6b87d580d72d0a621c44a033f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:08:45.292028  341591 start.go:364] duration metric: took 44.932µs to acquireMachinesLock for "ha-550225"
	I1002 07:08:45.292049  341591 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:08:45.292061  341591 fix.go:54] fixHost starting: 
	I1002 07:08:45.292330  341591 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:08:45.318814  341591 fix.go:112] recreateIfNeeded on ha-550225: state=Stopped err=<nil>
	W1002 07:08:45.318856  341591 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:08:45.330622  341591 out.go:252] * Restarting existing docker container for "ha-550225" ...
	I1002 07:08:45.330751  341591 cli_runner.go:164] Run: docker start ha-550225
	I1002 07:08:45.646890  341591 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:08:45.667650  341591 kic.go:430] container "ha-550225" state is running.
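For reference, a minimal Go sketch (not minikube's cli_runner) of the inspect-then-start step logged above; the container name ha-550225 and the docker CLI invocations are taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out to the same command the log shows:
// docker container inspect <name> --format={{.State.Status}}
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	name := "ha-550225" // container name from the log above
	state, err := containerState(name)
	if err != nil {
		panic(err)
	}
	if state != "running" {
		// mirrors the "Restarting existing docker container" step
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			panic(err)
		}
	}
	fmt.Println("previous state:", state)
}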
	I1002 07:08:45.669709  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:08:45.694012  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:45.694609  341591 machine.go:93] provisionDockerMachine start ...
	I1002 07:08:45.694683  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:45.718481  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:45.718795  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:45.718805  341591 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:08:45.719510  341591 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 07:08:48.850571  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:08:48.850596  341591 ubuntu.go:182] provisioning hostname "ha-550225"
	I1002 07:08:48.850671  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:48.868262  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:48.868584  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:48.868602  341591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225 && echo "ha-550225" | sudo tee /etc/hostname
	I1002 07:08:49.009524  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:08:49.009614  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.027738  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:49.028058  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:49.028089  341591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:08:49.159321  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
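The hostname provisioning above runs plain shell commands over SSH against 127.0.0.1:33178 as user docker, authenticating with the machine's id_rsa key. A rough sketch of that step using golang.org/x/crypto/ssh (an illustration only, not libmachine's implementation; host, port, user and key path are copied from the log):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in a throwaway test environment
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33178", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same command the provisioner runs to set the hostname.
	out, err := sess.CombinedOutput(`sudo hostname ha-550225 && echo "ha-550225" | sudo tee /etc/hostname`)
	fmt.Printf("output: %s err: %v\n", out, err)
}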
	I1002 07:08:49.159347  341591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:08:49.159380  341591 ubuntu.go:190] setting up certificates
	I1002 07:08:49.159407  341591 provision.go:84] configureAuth start
	I1002 07:08:49.159473  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:08:49.177020  341591 provision.go:143] copyHostCerts
	I1002 07:08:49.177064  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:49.177102  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:08:49.177123  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:49.177214  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:08:49.177322  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:49.177346  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:08:49.177356  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:49.177386  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:08:49.177445  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:49.177477  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:08:49.177486  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:49.177513  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:08:49.177571  341591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225 san=[127.0.0.1 192.168.49.2 ha-550225 localhost minikube]
	I1002 07:08:49.408806  341591 provision.go:177] copyRemoteCerts
	I1002 07:08:49.408883  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:08:49.408933  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.427268  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:49.523125  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:08:49.523193  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:08:49.541524  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:08:49.541587  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 07:08:49.560307  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:08:49.560439  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:08:49.579034  341591 provision.go:87] duration metric: took 419.599802ms to configureAuth
	I1002 07:08:49.579123  341591 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:08:49.579377  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:49.579486  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.596818  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:49.597138  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:49.597160  341591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:08:49.914967  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:08:49.914989  341591 machine.go:96] duration metric: took 4.220366309s to provisionDockerMachine
	I1002 07:08:49.914999  341591 start.go:293] postStartSetup for "ha-550225" (driver="docker")
	I1002 07:08:49.915010  341591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:08:49.915065  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:08:49.915139  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.934272  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.032623  341591 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:08:50.036993  341591 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:08:50.037025  341591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:08:50.037038  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:08:50.037102  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:08:50.037207  341591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:08:50.037223  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:08:50.037344  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:08:50.045768  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:08:50.065030  341591 start.go:296] duration metric: took 150.01442ms for postStartSetup
	I1002 07:08:50.065114  341591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:08:50.065165  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:50.083355  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.176451  341591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:08:50.181473  341591 fix.go:56] duration metric: took 4.889410348s for fixHost
	I1002 07:08:50.181541  341591 start.go:83] releasing machines lock for "ha-550225", held for 4.889504338s
	I1002 07:08:50.181637  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:08:50.200970  341591 ssh_runner.go:195] Run: cat /version.json
	I1002 07:08:50.201030  341591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:08:50.201094  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:50.201034  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:50.223487  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.226725  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.314949  341591 ssh_runner.go:195] Run: systemctl --version
	I1002 07:08:50.413766  341591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:08:50.452815  341591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:08:50.457414  341591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:08:50.457496  341591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:08:50.465709  341591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:08:50.465775  341591 start.go:495] detecting cgroup driver to use...
	I1002 07:08:50.465837  341591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:08:50.465897  341591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:08:50.481659  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:08:50.494377  341591 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:08:50.494539  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:08:50.510531  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:08:50.523730  341591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:08:50.636574  341591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:08:50.755906  341591 docker.go:234] disabling docker service ...
	I1002 07:08:50.756000  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:08:50.771446  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:08:50.785113  341591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:08:50.896624  341591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:08:51.014182  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:08:51.028269  341591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:08:51.042461  341591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:08:51.042584  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.051849  341591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:08:51.051966  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.061081  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.071350  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.080939  341591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:08:51.089739  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.099773  341591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.108596  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.118078  341591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:08:51.126369  341591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:08:51.134612  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:08:51.248761  341591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:08:51.375720  341591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:08:51.375791  341591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:08:51.380249  341591 start.go:563] Will wait 60s for crictl version
	I1002 07:08:51.380325  341591 ssh_runner.go:195] Run: which crictl
	I1002 07:08:51.384127  341591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:08:51.409087  341591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:08:51.409174  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:08:51.443563  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:08:51.476455  341591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:08:51.479290  341591 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:08:51.500260  341591 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:08:51.504889  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
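The /etc/hosts edit above is a filter-and-append one-liner: drop any existing host.minikube.internal entry, append the gateway mapping, and copy the result back into place. A small Go equivalent of that pattern (a sketch; the real step stays a shell command run through ssh_runner, and it needs root just like the sudo cp):

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.49.1\thost.minikube.internal" // gateway IP from the log

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous host.minikube.internal mapping, keep everything else.
		if strings.HasSuffix(strings.TrimSpace(line), "host.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}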
	I1002 07:08:51.515269  341591 kubeadm.go:883] updating cluster {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:08:51.515427  341591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:08:51.515487  341591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:08:51.554872  341591 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:08:51.554894  341591 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:08:51.554950  341591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:08:51.581938  341591 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:08:51.581962  341591 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:08:51.581972  341591 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:08:51.582066  341591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:08:51.582150  341591 ssh_runner.go:195] Run: crio config
	I1002 07:08:51.655227  341591 cni.go:84] Creating CNI manager for ""
	I1002 07:08:51.655292  341591 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:08:51.655338  341591 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:08:51.655381  341591 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-550225 NodeName:ha-550225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:08:51.655547  341591 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-550225"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
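The generated kubeadm config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new on the node, per the scp line further down. A quick, illustrative Go check that splits the documents and prints each kind; gopkg.in/yaml.v3 is an assumption for the example, not something the test itself uses:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Pass the path to the generated kubeadm YAML as the first argument.
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var hdr struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &hdr); err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", hdr.Kind, hdr.APIVersion)
	}
}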
	
	I1002 07:08:51.655604  341591 kube-vip.go:115] generating kube-vip config ...
	I1002 07:08:51.655689  341591 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:08:51.669633  341591 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:08:51.669809  341591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
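Two things are visible above: the ip_vs probe (sudo sh -c "lsmod | grep ip_vs") exits non-zero, so control-plane load balancing is skipped, and the resulting static-pod manifest pins the VIP 192.168.49.254 on eth0. A minimal Go sketch of that module probe (illustrative only, not minikube's kube-vip.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasIPVS reports whether any ip_vs* module shows up in lsmod output,
// the same signal the log above checks before enabling load balancing.
func hasIPVS() (bool, error) {
	out, err := exec.Command("lsmod").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(out), "\n") {
		fields := strings.Fields(line)
		if len(fields) > 0 && strings.HasPrefix(fields[0], "ip_vs") {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasIPVS()
	if err != nil {
		panic(err)
	}
	if !ok {
		fmt.Println("ip_vs not loaded: giving up control-plane load balancing, as in the log")
		return
	}
	fmt.Println("ip_vs available")
}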
	I1002 07:08:51.669912  341591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:08:51.678877  341591 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:08:51.678968  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 07:08:51.687674  341591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:08:51.701824  341591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:08:51.715602  341591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1002 07:08:51.729053  341591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:08:51.742491  341591 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:08:51.746387  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:08:51.756532  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:08:51.864835  341591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:08:51.883513  341591 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.2
	I1002 07:08:51.883542  341591 certs.go:195] generating shared ca certs ...
	I1002 07:08:51.883559  341591 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:51.883827  341591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:08:51.883890  341591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:08:51.883904  341591 certs.go:257] generating profile certs ...
	I1002 07:08:51.884024  341591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:08:51.884065  341591 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa
	I1002 07:08:51.884101  341591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1002 07:08:52.084876  341591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa ...
	I1002 07:08:52.084913  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa: {Name:mk90c6f5aee289b034fa32e2cf7c0be9f53e848e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.085095  341591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa ...
	I1002 07:08:52.085111  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa: {Name:mk49689d29918ab68ff897f47cace9dfee85c265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.085191  341591 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt
	I1002 07:08:52.085343  341591 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key
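The apiserver profile cert generated above carries the service IP, loopback, the three control-plane node IPs and the HA VIP as SANs. A self-signed crypto/x509 sketch with the same IP SANs; minikube actually signs this cert with its minikubeCA key, so this only illustrates the SAN handling:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// IP SANs copied from the "generating signed profile cert" line above.
	var ips []net.IP
	for _, s := range []string{
		"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.49.2", "192.168.49.3", "192.168.49.4", "192.168.49.254",
	} {
		ips = append(ips, net.ParseIP(s))
	}

	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	// Self-signed for brevity (the template doubles as the parent).
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}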
	I1002 07:08:52.085487  341591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:08:52.085509  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:08:52.085529  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:08:52.085552  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:08:52.085570  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:08:52.085588  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:08:52.085612  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:08:52.085628  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:08:52.085643  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:08:52.085700  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:08:52.085732  341591 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:08:52.085744  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:08:52.085773  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:08:52.085797  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:08:52.085823  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:08:52.085877  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:08:52.085911  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.085930  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.085941  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.087620  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:08:52.117144  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:08:52.137577  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:08:52.157475  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:08:52.184553  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:08:52.204351  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:08:52.223284  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:08:52.243353  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:08:52.262671  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:08:52.281139  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:08:52.299758  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:08:52.317722  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:08:52.331012  341591 ssh_runner.go:195] Run: openssl version
	I1002 07:08:52.338277  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:08:52.346960  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.351159  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.351246  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.393022  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:08:52.401297  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:08:52.409980  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.414890  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.414990  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.456952  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:08:52.465241  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:08:52.474008  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.478217  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.478283  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.521200  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:08:52.529506  341591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:08:52.535033  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:08:52.580207  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:08:52.630699  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:08:52.691156  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:08:52.745220  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:08:52.803585  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
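The series of openssl x509 -checkend 86400 runs above verifies that each control-plane certificate is still valid for at least another day. An equivalent check in Go with crypto/x509 (a sketch; the path is one of the certs from the log and the check would run on the node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certificates checked in the log above.
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same window as `openssl x509 -checkend 86400`: still valid 24h from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}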
	I1002 07:08:52.888339  341591 kubeadm.go:400] StartCluster: {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:08:52.888575  341591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:08:52.888690  341591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:08:52.933281  341591 cri.go:89] found id: "33fca634f948db8aca5186955624e23716df2846985727034e3329708ce55ca0"
	I1002 07:08:52.933358  341591 cri.go:89] found id: "d6201e9ebb1f7834795f1ed34af1c1531b7711bfef7ba9ec4f8b86cb19833552"
	I1002 07:08:52.933379  341591 cri.go:89] found id: "a09069dcbe74c144c7fb0aaabba0782111369a1c5d884db352906bac62c464a7"
	I1002 07:08:52.933401  341591 cri.go:89] found id: "ff6f36ad276da8f6ea87b58c1a6e4675a17751c812adf0bea3fb2ce4a3183dc0"
	I1002 07:08:52.933436  341591 cri.go:89] found id: "1360f133f64f29f11610a00ea639f98b5d2bbaae5d3ea5c0f099d47a97c24451"
	I1002 07:08:52.933462  341591 cri.go:89] found id: ""
	I1002 07:08:52.933564  341591 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 07:08:52.954557  341591 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T07:08:52Z" level=error msg="open /run/runc: no such file or directory"
	I1002 07:08:52.954731  341591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:08:52.966519  341591 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:08:52.966556  341591 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:08:52.966613  341591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:08:52.977313  341591 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:08:52.977720  341591 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-550225" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:08:52.977831  341591 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-292504/kubeconfig needs updating (will repair): [kubeconfig missing "ha-550225" cluster setting kubeconfig missing "ha-550225" context setting]
	I1002 07:08:52.978102  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.978623  341591 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:08:52.979134  341591 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:08:52.979154  341591 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:08:52.979160  341591 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:08:52.979165  341591 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:08:52.979174  341591 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:08:52.979433  341591 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
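After the kubeconfig repair above, the client config points at https://192.168.49.2:8443 with the profile's client cert and the minikube CA. A hedged client-go sketch that builds a client from that repaired kubeconfig and lists the cluster's nodes (illustrative only, not part of the test harness):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path from the log above.
	kubeconfig := "/home/jenkins/minikube-integration/21643-292504/kubeconfig"
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name) // expect ha-550225, ha-550225-m02, ha-550225-m03, ha-550225-m04
	}
}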
	I1002 07:08:52.979820  341591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:08:52.995042  341591 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:08:52.995069  341591 kubeadm.go:601] duration metric: took 28.506605ms to restartPrimaryControlPlane
	I1002 07:08:52.995093  341591 kubeadm.go:402] duration metric: took 106.757943ms to StartCluster
	I1002 07:08:52.995110  341591 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.995174  341591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:08:52.995752  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.995946  341591 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:08:52.995973  341591 start.go:241] waiting for startup goroutines ...
	I1002 07:08:52.995988  341591 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:08:52.996396  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:53.001878  341591 out.go:179] * Enabled addons: 
	I1002 07:08:53.004925  341591 addons.go:514] duration metric: took 8.918946ms for enable addons: enabled=[]
	I1002 07:08:53.004983  341591 start.go:246] waiting for cluster config update ...
	I1002 07:08:53.004993  341591 start.go:255] writing updated cluster config ...
	I1002 07:08:53.008718  341591 out.go:203] 
	I1002 07:08:53.012058  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:53.012193  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:53.015686  341591 out.go:179] * Starting "ha-550225-m02" control-plane node in "ha-550225" cluster
	I1002 07:08:53.018685  341591 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:08:53.021796  341591 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:08:53.024737  341591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:08:53.024783  341591 cache.go:58] Caching tarball of preloaded images
	I1002 07:08:53.024902  341591 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:08:53.024918  341591 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:08:53.025045  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:53.025270  341591 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:08:53.053242  341591 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:08:53.053267  341591 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:08:53.053282  341591 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:08:53.053306  341591 start.go:360] acquireMachinesLock for ha-550225-m02: {Name:mk11ef625bc214163cbeacdb736ddec4214a8374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:08:53.053365  341591 start.go:364] duration metric: took 39.27µs to acquireMachinesLock for "ha-550225-m02"
	I1002 07:08:53.053391  341591 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:08:53.053401  341591 fix.go:54] fixHost starting: m02
	I1002 07:08:53.053663  341591 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:08:53.082995  341591 fix.go:112] recreateIfNeeded on ha-550225-m02: state=Stopped err=<nil>
	W1002 07:08:53.083020  341591 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:08:53.086409  341591 out.go:252] * Restarting existing docker container for "ha-550225-m02" ...
	I1002 07:08:53.086490  341591 cli_runner.go:164] Run: docker start ha-550225-m02
	I1002 07:08:53.526547  341591 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:08:53.560540  341591 kic.go:430] container "ha-550225-m02" state is running.
	I1002 07:08:53.560941  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:08:53.589319  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:53.589569  341591 machine.go:93] provisionDockerMachine start ...
	I1002 07:08:53.589631  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:53.613911  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:53.614275  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:53.614286  341591 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:08:53.615331  341591 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 07:08:56.845810  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:08:56.845831  341591 ubuntu.go:182] provisioning hostname "ha-550225-m02"
	I1002 07:08:56.845894  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:56.874342  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:56.874643  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:56.874653  341591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225-m02 && echo "ha-550225-m02" | sudo tee /etc/hostname
	I1002 07:08:57.125200  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:08:57.125348  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:57.175744  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:57.176048  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:57.176063  341591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:08:57.375895  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:08:57.375973  341591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:08:57.376006  341591 ubuntu.go:190] setting up certificates
	I1002 07:08:57.376047  341591 provision.go:84] configureAuth start
	I1002 07:08:57.376159  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:08:57.404649  341591 provision.go:143] copyHostCerts
	I1002 07:08:57.404689  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:57.404723  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:08:57.404730  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:57.404806  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:08:57.404883  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:57.404899  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:08:57.404903  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:57.404928  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:08:57.404966  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:57.404981  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:08:57.404985  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:57.405007  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:08:57.405049  341591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225-m02 san=[127.0.0.1 192.168.49.3 ha-550225-m02 localhost minikube]
	I1002 07:08:58.253352  341591 provision.go:177] copyRemoteCerts
	I1002 07:08:58.253471  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:08:58.253549  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:58.284716  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:58.445457  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:08:58.445522  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:08:58.470364  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:08:58.470427  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:08:58.499404  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:08:58.499467  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 07:08:58.532579  341591 provision.go:87] duration metric: took 1.156483399s to configureAuth
	I1002 07:08:58.532607  341591 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:08:58.532851  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:58.532977  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:58.555257  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:58.555589  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:58.555604  341591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:08:59.611219  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:08:59.611244  341591 machine.go:96] duration metric: took 6.021666332s to provisionDockerMachine
	I1002 07:08:59.611278  341591 start.go:293] postStartSetup for "ha-550225-m02" (driver="docker")
	I1002 07:08:59.611297  341591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:08:59.611400  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:08:59.611473  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.649812  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:59.756024  341591 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:08:59.760197  341591 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:08:59.760226  341591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:08:59.760237  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:08:59.760299  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:08:59.760377  341591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:08:59.760384  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:08:59.760484  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:08:59.769466  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:08:59.791590  341591 start.go:296] duration metric: took 180.289185ms for postStartSetup
	I1002 07:08:59.791715  341591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:08:59.791794  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.812896  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:59.913229  341591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:08:59.919306  341591 fix.go:56] duration metric: took 6.865897009s for fixHost
	I1002 07:08:59.919329  341591 start.go:83] releasing machines lock for "ha-550225-m02", held for 6.865950129s
	I1002 07:08:59.919398  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:08:59.946647  341591 out.go:179] * Found network options:
	I1002 07:08:59.949695  341591 out.go:179]   - NO_PROXY=192.168.49.2
	W1002 07:08:59.952715  341591 proxy.go:120] fail to check proxy env: Error ip not in block
	W1002 07:08:59.952759  341591 proxy.go:120] fail to check proxy env: Error ip not in block
	I1002 07:08:59.952829  341591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:08:59.952894  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.953175  341591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:08:59.953233  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.989027  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:59.990560  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:09:00.478157  341591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:09:00.501356  341591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:09:00.501454  341591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:09:00.524313  341591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:09:00.524374  341591 start.go:495] detecting cgroup driver to use...
	I1002 07:09:00.524424  341591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:09:00.524542  341591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:09:00.551686  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:09:00.586292  341591 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:09:00.586360  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:09:00.619869  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:09:00.637822  341591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:09:01.096286  341591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:09:01.469209  341591 docker.go:234] disabling docker service ...
	I1002 07:09:01.469292  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:09:01.568628  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:09:01.594625  341591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:09:01.844380  341591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:09:02.076706  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:09:02.091901  341591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:09:02.109279  341591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:09:02.109364  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.122659  341591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:09:02.122751  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.137700  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.152110  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.170421  341591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:09:02.185373  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.201415  341591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.215850  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
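The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. Reconstructed from those commands alone (the [crio.image]/[crio.runtime] section placement is an assumption based on CRI-O's documented config layout, not copied from the node), the resulting fragment would look roughly like:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]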
	I1002 07:09:02.226273  341591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:09:02.235058  341591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:09:02.244989  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:09:02.482152  341591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:10:32.816328  341591 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.334137072s)
	I1002 07:10:32.816356  341591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:10:32.816423  341591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:10:32.820364  341591 start.go:563] Will wait 60s for crictl version
	I1002 07:10:32.820431  341591 ssh_runner.go:195] Run: which crictl
	I1002 07:10:32.824000  341591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:10:32.850862  341591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:10:32.850953  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:10:32.880614  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:10:32.912245  341591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:10:32.915198  341591 out.go:179]   - env NO_PROXY=192.168.49.2
	I1002 07:10:32.918443  341591 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:10:32.933458  341591 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:10:32.937660  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:10:32.947835  341591 mustload.go:65] Loading cluster: ha-550225
	I1002 07:10:32.948074  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:10:32.948339  341591 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:10:32.965455  341591 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:10:32.965737  341591 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.3
	I1002 07:10:32.965753  341591 certs.go:195] generating shared ca certs ...
	I1002 07:10:32.965768  341591 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:10:32.965883  341591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:10:32.965988  341591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:10:32.966005  341591 certs.go:257] generating profile certs ...
	I1002 07:10:32.966093  341591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:10:32.966164  341591 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.e172f685
	I1002 07:10:32.966209  341591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:10:32.966223  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:10:32.966236  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:10:32.966258  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:10:32.966274  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:10:32.966287  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:10:32.966299  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:10:32.966316  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:10:32.966327  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:10:32.966380  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:10:32.966412  341591 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:10:32.966426  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:10:32.966450  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:10:32.966474  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:10:32.966495  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:10:32.966534  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:10:32.966563  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:32.966580  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:10:32.966591  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:10:32.966649  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:10:32.984090  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:10:33.079415  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1002 07:10:33.085346  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1002 07:10:33.094080  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1002 07:10:33.098124  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1002 07:10:33.106895  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1002 07:10:33.110488  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1002 07:10:33.119266  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1002 07:10:33.123712  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1002 07:10:33.133884  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1002 07:10:33.137901  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1002 07:10:33.146372  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1002 07:10:33.150238  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1002 07:10:33.158857  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:10:33.178733  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:10:33.198632  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:10:33.218076  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:10:33.238363  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:10:33.257196  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:10:33.276752  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:10:33.296959  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:10:33.315515  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:10:33.334382  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:10:33.353232  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:10:33.371930  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1002 07:10:33.386343  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1002 07:10:33.402145  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1002 07:10:33.416991  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1002 07:10:33.433404  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1002 07:10:33.447888  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1002 07:10:33.461804  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1002 07:10:33.478080  341591 ssh_runner.go:195] Run: openssl version
	I1002 07:10:33.486077  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:10:33.496093  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:33.500252  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:33.500323  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:33.542203  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:10:33.550474  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:10:33.559422  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:10:33.563475  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:10:33.563544  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:10:33.606638  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:10:33.614955  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:10:33.624760  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:10:33.629454  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:10:33.629532  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:10:33.670697  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:10:33.679136  341591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:10:33.683757  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:10:33.729404  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:10:33.775724  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:10:33.817095  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:10:33.859304  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:10:33.900718  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
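Each openssl run above uses -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours. A minimal Go equivalent of that check (the file path is just one of the certs listed in this log; this is an illustrative sketch, not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// Example path taken from this log; minikube checks several certs under /var/lib/minikube/certs.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: is the cert still valid 24h from now?
		if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
			fmt.Println("certificate will expire within 24 hours")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24 hours")
	}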
	I1002 07:10:33.942018  341591 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1002 07:10:33.942118  341591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:10:33.942147  341591 kube-vip.go:115] generating kube-vip config ...
	I1002 07:10:33.942211  341591 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:10:33.955152  341591 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
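The failed `lsmod | grep ip_vs` above is why control-plane load-balancing is skipped; kube-vip is still configured below for the ARP-managed VIP. Since lsmod only formats /proc/modules, an equivalent standalone check in Go (an illustrative sketch, not minikube's code) would be:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// lsmod is essentially a pretty-printer for /proc/modules, so scanning it
		// directly is equivalent to `lsmod | grep ip_vs`.
		f, err := os.Open("/proc/modules")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			if strings.HasPrefix(scanner.Text(), "ip_vs") {
				fmt.Println("ip_vs kernel modules are loaded")
				return
			}
		}
		fmt.Println("ip_vs modules not loaded")
	}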
	I1002 07:10:33.955209  341591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1002 07:10:33.955278  341591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:10:33.964060  341591 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:10:33.964146  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1002 07:10:33.972349  341591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 07:10:33.986955  341591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:10:34.000411  341591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:10:34.019944  341591 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:10:34.024237  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:10:34.035378  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:10:34.172194  341591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:10:34.188479  341591 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:10:34.188914  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:10:34.194079  341591 out.go:179] * Verifying Kubernetes components...
	I1002 07:10:34.196849  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:10:34.335762  341591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:10:34.350979  341591 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1002 07:10:34.351051  341591 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1002 07:10:34.351428  341591 node_ready.go:35] waiting up to 6m0s for node "ha-550225-m02" to be "Ready" ...
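The retry lines that follow all come from repeatedly polling this node's Ready condition against the (unreachable) API server. A rough client-go sketch of that kind of wait, reusing the kubeconfig path and node name from this log (hypothetical code, not minikube's node_ready.go):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path and node name are examples taken from this log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21643-292504/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-550225-m02", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			// On errors (such as the connection-refused responses below), just wait and retry.
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for node to become Ready")
	}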
	I1002 07:11:06.236659  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:11:06.237065  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1002 07:11:08.352628  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:10.352901  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:12.852094  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:14.852800  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:19.143807  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:12:19.144210  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:52046->192.168.49.2:8443: read: connection reset by peer
	W1002 07:12:21.352097  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:23.352198  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:25.352707  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:27.852697  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:30.352903  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:32.852934  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:35.352921  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:37.852899  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:40.352147  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:13:45.017485  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:13:45.017917  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:59354->192.168.49.2:8443: read: connection reset by peer
	W1002 07:13:47.352022  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:49.352714  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:51.352825  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:53.852618  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:55.852865  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:58.351961  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:00.352833  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:02.852671  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:04.852832  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:06.852923  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:09.352699  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:11.852644  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:14.352881  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:16.852748  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:19.352661  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:21.852776  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:23.852965  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:25.853064  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:38.355323  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	W1002 07:14:48.356581  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	I1002 07:14:50.705710  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:14:50.706028  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:34198->192.168.49.2:8443: read: connection reset by peer
	W1002 07:14:52.852642  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:55.352291  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:57.352649  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:59.852686  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:02.351992  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:04.352640  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:06.852688  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:09.351928  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:11.352599  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:13.352684  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:15.852672  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:17.852933  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:20.352697  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:22.852904  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:25.352921  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:27.852663  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:30.352554  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:32.352752  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:34.352832  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:36.852783  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:39.352648  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:41.352902  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:43.851962  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:46.352385  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:48.352592  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:50.352899  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:52.852880  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:55.352702  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:57.852560  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:59.852697  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:01.852832  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:04.352611  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:06.852632  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:08.852866  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:20.352850  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	W1002 07:16:30.353494  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	I1002 07:16:32.822894  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:16:32.823551  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:44364->192.168.49.2:8443: read: connection reset by peer
	I1002 07:16:34.352311  341591 node_ready.go:38] duration metric: took 6m0.000854058s for node "ha-550225-m02" to be "Ready" ...
	I1002 07:16:34.356665  341591 out.go:203] 
	W1002 07:16:34.359815  341591 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 07:16:34.359839  341591 out.go:285] * 
	W1002 07:16:34.362170  341591 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:16:34.365348  341591 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.079197127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.08556225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.086082362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.107512425Z" level=info msg="Created container 075b15e6c74a52fc823514f3eb205759d40a99a80d0859594b42aca28159924d: kube-system/kube-controller-manager-ha-550225/kube-controller-manager" id=e9671816-71ab-4ee2-9a2b-f2ddea4bdc9a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.108304924Z" level=info msg="Starting container: 075b15e6c74a52fc823514f3eb205759d40a99a80d0859594b42aca28159924d" id=019ec56d-600d-4a41-a942-abd9b0a4b5cf name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.110179289Z" level=info msg="Started container" PID=1236 containerID=075b15e6c74a52fc823514f3eb205759d40a99a80d0859594b42aca28159924d description=kube-system/kube-controller-manager-ha-550225/kube-controller-manager id=019ec56d-600d-4a41-a942-abd9b0a4b5cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c10db252af9dad7133c29cf3fd7ff82b0ebcd9783fb3ae1d2569c9b69373fb8
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.077756642Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=4d37191b-e380-430b-9019-cfb9dcd6f54d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.079249145Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=52cf1b86-8421-41d9-9bd0-29ca469613d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.080627537Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-550225/kube-apiserver" id=914c8388-6f74-471c-aa31-3a90fd94f956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.080887618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.089577727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.090437329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.121966352Z" level=info msg="Created container ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f: kube-system/kube-apiserver-ha-550225/kube-apiserver" id=914c8388-6f74-471c-aa31-3a90fd94f956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.122937741Z" level=info msg="Starting container: ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f" id=f5515dda-06cd-465d-9126-0a5d2d0f75c5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.134703942Z" level=info msg="Started container" PID=1247 containerID=ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f description=kube-system/kube-apiserver-ha-550225/kube-apiserver id=f5515dda-06cd-465d-9126-0a5d2d0f75c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6915c27c6e4c56041d4460161b4b50ad554915297fd4510ea5142f073c63dcf8
	Oct 02 07:16:31 ha-550225 conmon[1244]: conmon ec59b9b67a698e5db189 <ninfo>: container 1247 exited with status 255
	Oct 02 07:16:31 ha-550225 crio[664]: time="2025-10-02T07:16:31.825737329Z" level=info msg="Stopping container: ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f (timeout: 30s)" id=41cc4e72-76db-457f-859f-5e5fe66d5076 name=/runtime.v1.RuntimeService/StopContainer
	Oct 02 07:16:31 ha-550225 crio[664]: time="2025-10-02T07:16:31.836349221Z" level=info msg="Stopped container ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f: kube-system/kube-apiserver-ha-550225/kube-apiserver" id=41cc4e72-76db-457f-859f-5e5fe66d5076 name=/runtime.v1.RuntimeService/StopContainer
	Oct 02 07:16:32 ha-550225 crio[664]: time="2025-10-02T07:16:32.207132978Z" level=info msg="Removing container: 7b6abe1f2f6e802787eb5442b81fb8a6b3fcefd828d59667468088d5032dd0c4" id=d30010dd-5488-4c1d-9b4d-6f59d8f62713 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:32 ha-550225 crio[664]: time="2025-10-02T07:16:32.215867806Z" level=info msg="Error loading conmon cgroup of container 7b6abe1f2f6e802787eb5442b81fb8a6b3fcefd828d59667468088d5032dd0c4: cgroup deleted" id=d30010dd-5488-4c1d-9b4d-6f59d8f62713 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:32 ha-550225 crio[664]: time="2025-10-02T07:16:32.218879511Z" level=info msg="Removed container 7b6abe1f2f6e802787eb5442b81fb8a6b3fcefd828d59667468088d5032dd0c4: kube-system/kube-apiserver-ha-550225/kube-apiserver" id=d30010dd-5488-4c1d-9b4d-6f59d8f62713 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:32 ha-550225 conmon[1233]: conmon 075b15e6c74a52fc8235 <ninfo>: container 1236 exited with status 1
	Oct 02 07:16:33 ha-550225 crio[664]: time="2025-10-02T07:16:33.212373377Z" level=info msg="Removing container: a7d0e0a58f7b8248b82d9489ac4e72aa74556902886fc58d6212397adf27e207" id=ca09262e-435d-4b74-8729-ff01bba5fbce name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:33 ha-550225 crio[664]: time="2025-10-02T07:16:33.219592613Z" level=info msg="Error loading conmon cgroup of container a7d0e0a58f7b8248b82d9489ac4e72aa74556902886fc58d6212397adf27e207: cgroup deleted" id=ca09262e-435d-4b74-8729-ff01bba5fbce name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:33 ha-550225 crio[664]: time="2025-10-02T07:16:33.222685139Z" level=info msg="Removed container a7d0e0a58f7b8248b82d9489ac4e72aa74556902886fc58d6212397adf27e207: kube-system/kube-controller-manager-ha-550225/kube-controller-manager" id=ca09262e-435d-4b74-8729-ff01bba5fbce name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	ec59b9b67a698       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   27 seconds ago      Exited              kube-apiserver            6                   6915c27c6e4c5       kube-apiserver-ha-550225            kube-system
	075b15e6c74a5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   28 seconds ago      Exited              kube-controller-manager   7                   4c10db252af9d       kube-controller-manager-ha-550225   kube-system
	883d49fba5ac5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   2 minutes ago       Running             etcd                      2                   b3ee9fc964046       etcd-ha-550225                      kube-system
	d6201e9ebb1f7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Exited              etcd                      1                   b3ee9fc964046       etcd-ha-550225                      kube-system
	a09069dcbe74c       27aa99ef07bb63db109cae7189f6029203a1ba86e8d201ca72eb836e3cdd0b43   7 minutes ago       Running             kube-vip                  0                   0cbc1c071aca4       kube-vip-ha-550225                  kube-system
	ff6f36ad276da       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            1                   356b386bea9bb       kube-scheduler-ha-550225            kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014797] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531434] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.039899] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.787301] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.571073] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 2 05:52] hrtimer: interrupt took 24222969 ns
	[Oct 2 06:40] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:42] overlayfs: idmapped layers are currently not supported
	[  +0.072713] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 06:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 06:49] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:03] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:06] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:07] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:08] overlayfs: idmapped layers are currently not supported
	[  +3.056037] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [883d49fba5ac5d237dfa6b26b5b95e98f640c5dea3f2599a3b517c0c8be55896] <==
	{"level":"info","ts":"2025-10-02T07:16:33.317958Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:33.317969Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-10-02T07:16:33.621495Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040356167889185,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-02T07:16:33.854109Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"340e91ee989e8740","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-10-02T07:16:33.854144Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ae3c16a0ff0d2d6f","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-02T07:16:33.854189Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ae3c16a0ff0d2d6f","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-02T07:16:33.854192Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"340e91ee989e8740","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-10-02T07:16:34.121903Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040356167889185,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-02T07:16:34.613128Z","caller":"etcdserver/v3_server.go:923","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2025-10-02T07:16:34.613219Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"7.000850899s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2025-10-02T07:16:34.613252Z","caller":"traceutil/trace.go:172","msg":"trace[51641939] range","detail":"{range_begin:; range_end:; }","duration":"7.000901656s","start":"2025-10-02T07:16:27.612339Z","end":"2025-10-02T07:16:34.613241Z","steps":["trace[51641939] 'agreement among raft nodes before linearized reading'  (duration: 7.000848183s)"],"step_count":1}
	{"level":"error","ts":"2025-10-02T07:16:34.613285Z","caller":"etcdhttp/health.go:345","msg":"Health check error","path":"/readyz","reason":"[+]non_learner ok\n[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHTTPEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:345\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2220\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2747\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3210\nnet/http.(*conn).serve\n\tnet/http/server.go:2092"}
	{"level":"info","ts":"2025-10-02T07:16:34.917802Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:34.917854Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:34.917878Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2288] sent MsgPreVote request to 340e91ee989e8740 at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:34.917892Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2288] sent MsgPreVote request to ae3c16a0ff0d2d6f at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:34.917920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:34.917931Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-10-02T07:16:36.497988Z","caller":"etcdserver/server.go:1814","msg":"failed to publish local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-550225 ClientURLs:[https://192.168.49.2:2379]}","publish-timeout":"7s","error":"context deadline exceeded"}
	{"level":"info","ts":"2025-10-02T07:16:36.517442Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:36.517491Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:36.517513Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2288] sent MsgPreVote request to 340e91ee989e8740 at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:36.517527Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2288] sent MsgPreVote request to ae3c16a0ff0d2d6f at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:36.517555Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:36.517565Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	
	
	==> etcd [d6201e9ebb1f7834795f1ed34af1c1531b7711bfef7ba9ec4f8b86cb19833552] <==
	{"level":"info","ts":"2025-10-02T07:14:08.631118Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T07:14:08.631162Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:14:08.631196Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:14:08.631205Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:14:08.631187Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"340e91ee989e8740"}
	{"level":"warn","ts":"2025-10-02T07:14:08.631246Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:14:08.631303Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:14:08.631312Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:14:08.631281Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631330Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631404Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631429Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631449Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631462Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631473Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631483Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631503Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631522Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631535Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631547Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631563Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.635633Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T07:14:08.635736Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:14:08.635777Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T07:14:08.635785Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-550225","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 07:16:38 up  1:59,  0 user,  load average: 0.23, 0.65, 1.20
	Linux ha-550225 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f] <==
	I1002 07:16:10.211958       1 server.go:150] Version: v1.34.1
	I1002 07:16:10.212068       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1002 07:16:11.752050       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1002 07:16:11.752133       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1002 07:16:11.752166       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1002 07:16:11.752200       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1002 07:16:11.752232       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1002 07:16:11.752261       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1002 07:16:11.752293       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1002 07:16:11.752324       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1002 07:16:11.752356       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1002 07:16:11.752386       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1002 07:16:11.752419       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1002 07:16:11.752463       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	I1002 07:16:11.788269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	W1002 07:16:11.797033       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1002 07:16:11.798587       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1002 07:16:11.811605       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 07:16:11.815343       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1002 07:16:11.815458       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1002 07:16:11.816365       1 instance.go:239] Using reconciler: lease
	W1002 07:16:11.818696       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1002 07:16:31.782202       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1002 07:16:31.790759       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1002 07:16:31.817391       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [075b15e6c74a52fc823514f3eb205759d40a99a80d0859594b42aca28159924d] <==
	I1002 07:16:10.596311       1 serving.go:386] Generated self-signed cert in-memory
	I1002 07:16:12.050398       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1002 07:16:12.050434       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:16:12.052007       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 07:16:12.052116       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 07:16:12.052771       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1002 07:16:12.052830       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 07:16:32.827381       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [ff6f36ad276da8f6ea87b58c1a6e4675a17751c812adf0bea3fb2ce4a3183dc0] <==
	E1002 07:15:37.102073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:15:38.442601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:15:41.823331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:15:44.775785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:15:45.258574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:15:46.491372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:15:46.769593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:15:52.124898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:15:57.001159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:15:57.379525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:15:59.973932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:16:00.856989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 07:16:04.337932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:16:05.218671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 07:16:22.641657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:16:23.826431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 07:16:26.580558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:16:29.569675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:16:32.831141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53042->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:16:32.831282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53062->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:16:32.831376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53078->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:16:32.831476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53080->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:16:32.831576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:60646->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:16:32.831659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53114->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:16:33.912373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	
	
	==> kubelet <==
	Oct 02 07:16:35 ha-550225 kubelet[799]: E1002 07:16:35.656990     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:35 ha-550225 kubelet[799]: E1002 07:16:35.757937     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:35 ha-550225 kubelet[799]: E1002 07:16:35.858995     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:35 ha-550225 kubelet[799]: E1002 07:16:35.959866     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:36 ha-550225 kubelet[799]: E1002 07:16:36.061081     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:36 ha-550225 kubelet[799]: E1002 07:16:36.161995     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:36 ha-550225 kubelet[799]: E1002 07:16:36.263447     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:36 ha-550225 kubelet[799]: E1002 07:16:36.364739     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:36 ha-550225 kubelet[799]: E1002 07:16:36.465700     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:36 ha-550225 kubelet[799]: E1002 07:16:36.567067     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:36 ha-550225 kubelet[799]: E1002 07:16:36.668604     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:36 ha-550225 kubelet[799]: E1002 07:16:36.769943     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:36 ha-550225 kubelet[799]: E1002 07:16:36.870904     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:36 ha-550225 kubelet[799]: E1002 07:16:36.972186     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:37 ha-550225 kubelet[799]: E1002 07:16:37.073689     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:37 ha-550225 kubelet[799]: E1002 07:16:37.175179     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:37 ha-550225 kubelet[799]: E1002 07:16:37.278939     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:37 ha-550225 kubelet[799]: E1002 07:16:37.379883     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:37 ha-550225 kubelet[799]: E1002 07:16:37.480930     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:37 ha-550225 kubelet[799]: E1002 07:16:37.582249     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:37 ha-550225 kubelet[799]: E1002 07:16:37.682858     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:37 ha-550225 kubelet[799]: E1002 07:16:37.783755     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:37 ha-550225 kubelet[799]: E1002 07:16:37.884665     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:37 ha-550225 kubelet[799]: E1002 07:16:37.985618     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:38 ha-550225 kubelet[799]: E1002 07:16:38.086654     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-550225 -n ha-550225
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-550225 -n ha-550225: exit status 2 (336.317576ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-550225" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (2.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-550225" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-550225\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-550225\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-550225\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvid
ia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizat
ions\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-550225
helpers_test.go:243: (dbg) docker inspect ha-550225:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	        "Created": "2025-10-02T07:02:30.539981852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 341718,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:08:45.398672695Z",
	            "FinishedAt": "2025-10-02T07:08:44.591030685Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hosts",
	        "LogPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c-json.log",
	        "Name": "/ha-550225",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-550225:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-550225",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	                "LowerDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-550225",
	                "Source": "/var/lib/docker/volumes/ha-550225/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-550225",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-550225",
	                "name.minikube.sigs.k8s.io": "ha-550225",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c2d172050d987c718db772c5aba92de1dca5d0823f878bf48657984e81707ec",
	            "SandboxKey": "/var/run/docker/netns/8c2d172050d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-550225": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:7c:4c:83:e8:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "87a294cab4b5d50d5f227902c62678f378fbede9275f1d54f0b3de7a1f36e1a0",
	                    "EndpointID": "d33c1aff4a1a0ea6be34d85bfad24dbdc7a27874c0cd3475808500db307a6e4e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-550225",
	                        "1c1f8ec53310"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
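	(Editor's sketch, not part of the captured test output: the "State" block in the inspect dump above can be read directly with the same Go-template format query minikube itself issues later in this log, `docker container inspect ha-550225 --format={{.State.Status}}`. A minimal Go wrapper around that command, using only os/exec, is shown below; the helper name containerStatus is illustrative.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerStatus runs the same format query minikube uses in the log below
	// and returns the container's State.Status string (e.g. "running").
	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		status, err := containerStatus("ha-550225")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ha-550225 state:", status) // matches "Status": "running" in the dump above
	}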
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-550225 -n ha-550225
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-550225 -n ha-550225: exit status 2 (305.104987ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m02 sudo cat /home/docker/cp-test_ha-550225-m03_ha-550225-m02.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m03:/home/docker/cp-test.txt ha-550225-m04:/home/docker/cp-test_ha-550225-m03_ha-550225-m04.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test_ha-550225-m03_ha-550225-m04.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp testdata/cp-test.txt ha-550225-m04:/home/docker/cp-test.txt                                                             │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216719830/001/cp-test_ha-550225-m04.txt │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225:/home/docker/cp-test_ha-550225-m04_ha-550225.txt                       │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225.txt                                                 │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m02:/home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m02 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m03:/home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ node    │ ha-550225 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ node    │ ha-550225 node start m02 --alsologtostderr -v 5                                                                                      │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:08 UTC │
	│ node    │ ha-550225 node list --alsologtostderr -v 5                                                                                           │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │                     │
	│ stop    │ ha-550225 stop --alsologtostderr -v 5                                                                                                │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │ 02 Oct 25 07:08 UTC │
	│ start   │ ha-550225 start --wait true --alsologtostderr -v 5                                                                                   │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │                     │
	│ node    │ ha-550225 node list --alsologtostderr -v 5                                                                                           │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	│ node    │ ha-550225 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:08:44
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:08:44.939810  341591 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:08:44.940011  341591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:08:44.940043  341591 out.go:374] Setting ErrFile to fd 2...
	I1002 07:08:44.940065  341591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:08:44.940373  341591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:08:44.940829  341591 out.go:368] Setting JSON to false
	I1002 07:08:44.941737  341591 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6676,"bootTime":1759382249,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:08:44.941852  341591 start.go:140] virtualization:  
	I1002 07:08:44.945309  341591 out.go:179] * [ha-550225] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:08:44.949071  341591 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:08:44.949136  341591 notify.go:220] Checking for updates...
	I1002 07:08:44.954765  341591 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:08:44.957619  341591 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:08:44.960532  341591 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:08:44.963482  341591 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:08:44.966346  341591 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:08:44.969606  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:44.969708  341591 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:08:44.989812  341591 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:08:44.989931  341591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:08:45.116140  341591 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:08:45.103955411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:08:45.116266  341591 docker.go:318] overlay module found
	I1002 07:08:45.119605  341591 out.go:179] * Using the docker driver based on existing profile
	I1002 07:08:45.122721  341591 start.go:304] selected driver: docker
	I1002 07:08:45.122756  341591 start.go:924] validating driver "docker" against &{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:08:45.122900  341591 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:08:45.123044  341591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:08:45.249038  341591 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:08:45.234686313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:08:45.251229  341591 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:08:45.251295  341591 cni.go:84] Creating CNI manager for ""
	I1002 07:08:45.251506  341591 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:08:45.251808  341591 start.go:348] cluster config:
	{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:08:45.255266  341591 out.go:179] * Starting "ha-550225" primary control-plane node in "ha-550225" cluster
	I1002 07:08:45.258893  341591 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:08:45.262396  341591 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:08:45.265430  341591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:08:45.265522  341591 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:08:45.265535  341591 cache.go:58] Caching tarball of preloaded images
	I1002 07:08:45.265608  341591 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:08:45.265695  341591 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:08:45.265710  341591 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:08:45.265874  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:45.291884  341591 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:08:45.291911  341591 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:08:45.291937  341591 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:08:45.291963  341591 start.go:360] acquireMachinesLock for ha-550225: {Name:mkc1f009b4f35f6b87d580d72d0a621c44a033f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:08:45.292028  341591 start.go:364] duration metric: took 44.932µs to acquireMachinesLock for "ha-550225"
	I1002 07:08:45.292049  341591 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:08:45.292061  341591 fix.go:54] fixHost starting: 
	I1002 07:08:45.292330  341591 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:08:45.318814  341591 fix.go:112] recreateIfNeeded on ha-550225: state=Stopped err=<nil>
	W1002 07:08:45.318856  341591 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:08:45.330622  341591 out.go:252] * Restarting existing docker container for "ha-550225" ...
	I1002 07:08:45.330751  341591 cli_runner.go:164] Run: docker start ha-550225
	I1002 07:08:45.646890  341591 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:08:45.667650  341591 kic.go:430] container "ha-550225" state is running.
	I1002 07:08:45.669709  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:08:45.694012  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:45.694609  341591 machine.go:93] provisionDockerMachine start ...
	I1002 07:08:45.694683  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:45.718481  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:45.718795  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:45.718805  341591 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:08:45.719510  341591 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 07:08:48.850571  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:08:48.850596  341591 ubuntu.go:182] provisioning hostname "ha-550225"
	I1002 07:08:48.850671  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:48.868262  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:48.868584  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:48.868602  341591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225 && echo "ha-550225" | sudo tee /etc/hostname
	I1002 07:08:49.009524  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:08:49.009614  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.027738  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:49.028058  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:49.028089  341591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:08:49.159321  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:08:49.159347  341591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:08:49.159380  341591 ubuntu.go:190] setting up certificates
	I1002 07:08:49.159407  341591 provision.go:84] configureAuth start
	I1002 07:08:49.159473  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:08:49.177020  341591 provision.go:143] copyHostCerts
	I1002 07:08:49.177064  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:49.177102  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:08:49.177123  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:49.177214  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:08:49.177322  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:49.177346  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:08:49.177356  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:49.177386  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:08:49.177445  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:49.177477  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:08:49.177486  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:49.177513  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:08:49.177571  341591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225 san=[127.0.0.1 192.168.49.2 ha-550225 localhost minikube]
	I1002 07:08:49.408806  341591 provision.go:177] copyRemoteCerts
	I1002 07:08:49.408883  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:08:49.408933  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.427268  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:49.523125  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:08:49.523193  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:08:49.541524  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:08:49.541587  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 07:08:49.560307  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:08:49.560439  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:08:49.579034  341591 provision.go:87] duration metric: took 419.599802ms to configureAuth
	I1002 07:08:49.579123  341591 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:08:49.579377  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:49.579486  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.596818  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:49.597138  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1002 07:08:49.597160  341591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:08:49.914967  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:08:49.914989  341591 machine.go:96] duration metric: took 4.220366309s to provisionDockerMachine
	I1002 07:08:49.914999  341591 start.go:293] postStartSetup for "ha-550225" (driver="docker")
	I1002 07:08:49.915010  341591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:08:49.915065  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:08:49.915139  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:49.934272  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.032623  341591 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:08:50.036993  341591 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:08:50.037025  341591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:08:50.037038  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:08:50.037102  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:08:50.037207  341591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:08:50.037223  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:08:50.037344  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:08:50.045768  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:08:50.065030  341591 start.go:296] duration metric: took 150.01442ms for postStartSetup
	I1002 07:08:50.065114  341591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:08:50.065165  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:50.083355  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.176451  341591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:08:50.181473  341591 fix.go:56] duration metric: took 4.889410348s for fixHost
	I1002 07:08:50.181541  341591 start.go:83] releasing machines lock for "ha-550225", held for 4.889504338s
	I1002 07:08:50.181637  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:08:50.200970  341591 ssh_runner.go:195] Run: cat /version.json
	I1002 07:08:50.201030  341591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:08:50.201094  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:50.201034  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:08:50.223487  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.226725  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:08:50.314949  341591 ssh_runner.go:195] Run: systemctl --version
	I1002 07:08:50.413766  341591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:08:50.452815  341591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:08:50.457414  341591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:08:50.457496  341591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:08:50.465709  341591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:08:50.465775  341591 start.go:495] detecting cgroup driver to use...
	I1002 07:08:50.465837  341591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:08:50.465897  341591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:08:50.481659  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:08:50.494377  341591 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:08:50.494539  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:08:50.510531  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:08:50.523730  341591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:08:50.636574  341591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:08:50.755906  341591 docker.go:234] disabling docker service ...
	I1002 07:08:50.756000  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:08:50.771446  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:08:50.785113  341591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:08:50.896624  341591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:08:51.014182  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:08:51.028269  341591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:08:51.042461  341591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:08:51.042584  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.051849  341591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:08:51.051966  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.061081  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.071350  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.080939  341591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:08:51.089739  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.099773  341591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.108596  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:08:51.118078  341591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:08:51.126369  341591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:08:51.134612  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:08:51.248761  341591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:08:51.375720  341591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:08:51.375791  341591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:08:51.380249  341591 start.go:563] Will wait 60s for crictl version
	I1002 07:08:51.380325  341591 ssh_runner.go:195] Run: which crictl
	I1002 07:08:51.384127  341591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:08:51.409087  341591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:08:51.409174  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:08:51.443563  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:08:51.476455  341591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:08:51.479290  341591 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:08:51.500260  341591 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:08:51.504889  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:08:51.515269  341591 kubeadm.go:883] updating cluster {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:08:51.515427  341591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:08:51.515487  341591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:08:51.554872  341591 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:08:51.554894  341591 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:08:51.554950  341591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:08:51.581938  341591 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:08:51.581962  341591 cache_images.go:85] Images are preloaded, skipping loading
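
Both "sudo crictl images --output json" runs return the full image list from CRI-O, which is compared against the preloaded-tarball manifest for v1.34.1; since everything is already present, extraction and image loading are skipped. The same check can be reproduced manually on the node (jq is assumed to be available, purely for readability):

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort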
	I1002 07:08:51.581972  341591 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:08:51.582066  341591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
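
The [Unit]/[Service] fragment above is the kubelet systemd drop-in (written a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, 359 bytes): the empty ExecStart= first clears whatever kubelet.service defines, and the second ExecStart= relaunches the kubelet from the version-pinned binary path with the node-specific flags (--hostname-override=ha-550225, --node-ip=192.168.49.2). Once copied, the effective unit can be inspected with:

    sudo systemctl cat kubelet          # shows kubelet.service plus the 10-kubeadm.conf override
    sudo systemctl daemon-reload && sudo systemctl start kubelet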
	I1002 07:08:51.582150  341591 ssh_runner.go:195] Run: crio config
	I1002 07:08:51.655227  341591 cni.go:84] Creating CNI manager for ""
	I1002 07:08:51.655292  341591 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:08:51.655338  341591 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:08:51.655381  341591 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-550225 NodeName:ha-550225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:08:51.655547  341591 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-550225"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
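
The rendered kubeadm config above combines four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is uploaded below as /var/tmp/minikube/kubeadm.yaml.new (2206 bytes). On this run the later "diff -u kubeadm.yaml kubeadm.yaml.new" finds no changes, so it is never re-applied; on a fresh control plane, roughly the following would consume it (illustrative only, minikube drives kubeadm itself with additional flags):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml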
	
	I1002 07:08:51.655604  341591 kube-vip.go:115] generating kube-vip config ...
	I1002 07:08:51.655689  341591 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:08:51.669633  341591 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:08:51.669809  341591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
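
This static pod manifest (written below to /etc/kubernetes/manifests/kube-vip.yaml, 1358 bytes) is what keeps the HA endpoint 192.168.49.254 reachable: the control-plane nodes compete for the plndr-cp-lock lease in kube-system, and the current holder attaches the VIP to eth0 and answers ARP for it. Because the earlier "lsmod | grep ip_vs" probe failed, IPVS-based control-plane load-balancing was skipped and the VIP is handled purely through ARP plus leader election. Two quick checks once the cluster is up, using the names from the manifest above:

    kubectl -n kube-system get lease plndr-cp-lock
    ip -4 addr show dev eth0 | grep 192.168.49.254   # on the node currently holding the lease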
	I1002 07:08:51.669912  341591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:08:51.678877  341591 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:08:51.678968  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 07:08:51.687674  341591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:08:51.701824  341591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:08:51.715602  341591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1002 07:08:51.729053  341591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:08:51.742491  341591 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:08:51.746387  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:08:51.756532  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:08:51.864835  341591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:08:51.883513  341591 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.2
	I1002 07:08:51.883542  341591 certs.go:195] generating shared ca certs ...
	I1002 07:08:51.883559  341591 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:51.883827  341591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:08:51.883890  341591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:08:51.883904  341591 certs.go:257] generating profile certs ...
	I1002 07:08:51.884024  341591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:08:51.884065  341591 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa
	I1002 07:08:51.884101  341591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1002 07:08:52.084876  341591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa ...
	I1002 07:08:52.084913  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa: {Name:mk90c6f5aee289b034fa32e2cf7c0be9f53e848e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.085095  341591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa ...
	I1002 07:08:52.085111  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa: {Name:mk49689d29918ab68ff897f47cace9dfee85c265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.085191  341591 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt.bf5122aa -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt
	I1002 07:08:52.085343  341591 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key
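
The regenerated apiserver serving certificate has to be valid for every address a client might dial: the service IPs 10.96.0.1 and 10.0.0.1, loopback, the three control-plane node IPs (192.168.49.2-4) and the kube-vip VIP 192.168.49.254, hence the SAN list in the generation step above. The SANs baked into the installed certificate can be verified on the node with:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'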
	I1002 07:08:52.085487  341591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:08:52.085509  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:08:52.085529  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:08:52.085552  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:08:52.085570  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:08:52.085588  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:08:52.085612  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:08:52.085628  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:08:52.085643  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:08:52.085700  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:08:52.085732  341591 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:08:52.085744  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:08:52.085773  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:08:52.085797  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:08:52.085823  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:08:52.085877  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:08:52.085911  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.085930  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.085941  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.087620  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:08:52.117144  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:08:52.137577  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:08:52.157475  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:08:52.184553  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:08:52.204351  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:08:52.223284  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:08:52.243353  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:08:52.262671  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:08:52.281139  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:08:52.299758  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:08:52.317722  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:08:52.331012  341591 ssh_runner.go:195] Run: openssl version
	I1002 07:08:52.338277  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:08:52.346960  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.351159  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.351246  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:08:52.393022  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:08:52.401297  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:08:52.409980  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.414890  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.414990  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:08:52.456952  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:08:52.465241  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:08:52.474008  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.478217  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.478283  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:08:52.521200  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
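
Each "openssl x509 -hash -noout" call prints the subject-name hash that OpenSSL uses to index /etc/ssl/certs, and the following ln -fs creates the corresponding <hash>.0 link (51391683.0, 3ec20f2e.0 and b5213941.0 above), which is how the minikube CA and the test certificates become trusted system-wide. The same two steps for an arbitrary PEM:

    CERT=/usr/share/ca-certificates/minikubeCA.pem   # placeholder path
    H=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${H}.0"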
	I1002 07:08:52.529506  341591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:08:52.535033  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:08:52.580207  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:08:52.630699  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:08:52.691156  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:08:52.745220  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:08:52.803585  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
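
The six "-checkend 86400" runs confirm that none of the existing kubeadm-managed certificates (apiserver-etcd-client, apiserver-kubelet-client, etcd server/peer/healthcheck, front-proxy-client) expires within the next 24 hours; openssl exits 0 if the certificate is still valid at now+86400s and non-zero otherwise, so an expiring certificate can be renewed rather than reused. For example:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo 'valid for at least 24h' || echo 'expires within 24h'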
	I1002 07:08:52.888339  341591 kubeadm.go:400] StartCluster: {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:08:52.888575  341591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:08:52.888690  341591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:08:52.933281  341591 cri.go:89] found id: "33fca634f948db8aca5186955624e23716df2846985727034e3329708ce55ca0"
	I1002 07:08:52.933358  341591 cri.go:89] found id: "d6201e9ebb1f7834795f1ed34af1c1531b7711bfef7ba9ec4f8b86cb19833552"
	I1002 07:08:52.933379  341591 cri.go:89] found id: "a09069dcbe74c144c7fb0aaabba0782111369a1c5d884db352906bac62c464a7"
	I1002 07:08:52.933401  341591 cri.go:89] found id: "ff6f36ad276da8f6ea87b58c1a6e4675a17751c812adf0bea3fb2ce4a3183dc0"
	I1002 07:08:52.933436  341591 cri.go:89] found id: "1360f133f64f29f11610a00ea639f98b5d2bbaae5d3ea5c0f099d47a97c24451"
	I1002 07:08:52.933462  341591 cri.go:89] found id: ""
	I1002 07:08:52.933564  341591 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 07:08:52.954557  341591 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T07:08:52Z" level=error msg="open /run/runc: no such file or directory"
	I1002 07:08:52.954731  341591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:08:52.966519  341591 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:08:52.966556  341591 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:08:52.966613  341591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:08:52.977313  341591 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:08:52.977720  341591 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-550225" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:08:52.977831  341591 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-292504/kubeconfig needs updating (will repair): [kubeconfig missing "ha-550225" cluster setting kubeconfig missing "ha-550225" context setting]
	I1002 07:08:52.978102  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.978623  341591 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:08:52.979134  341591 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:08:52.979154  341591 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:08:52.979160  341591 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:08:52.979165  341591 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:08:52.979174  341591 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:08:52.979433  341591 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:08:52.979820  341591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:08:52.995042  341591 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:08:52.995069  341591 kubeadm.go:601] duration metric: took 28.506605ms to restartPrimaryControlPlane
	I1002 07:08:52.995093  341591 kubeadm.go:402] duration metric: took 106.757943ms to StartCluster
	I1002 07:08:52.995110  341591 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.995174  341591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:08:52.995752  341591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:08:52.995946  341591 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:08:52.995973  341591 start.go:241] waiting for startup goroutines ...
	I1002 07:08:52.995988  341591 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:08:52.996396  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:53.001878  341591 out.go:179] * Enabled addons: 
	I1002 07:08:53.004925  341591 addons.go:514] duration metric: took 8.918946ms for enable addons: enabled=[]
	I1002 07:08:53.004983  341591 start.go:246] waiting for cluster config update ...
	I1002 07:08:53.004993  341591 start.go:255] writing updated cluster config ...
	I1002 07:08:53.008718  341591 out.go:203] 
	I1002 07:08:53.012058  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:53.012193  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:53.015686  341591 out.go:179] * Starting "ha-550225-m02" control-plane node in "ha-550225" cluster
	I1002 07:08:53.018685  341591 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:08:53.021796  341591 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:08:53.024737  341591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:08:53.024783  341591 cache.go:58] Caching tarball of preloaded images
	I1002 07:08:53.024902  341591 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:08:53.024918  341591 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:08:53.025045  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:53.025270  341591 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:08:53.053242  341591 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:08:53.053267  341591 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:08:53.053282  341591 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:08:53.053306  341591 start.go:360] acquireMachinesLock for ha-550225-m02: {Name:mk11ef625bc214163cbeacdb736ddec4214a8374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:08:53.053365  341591 start.go:364] duration metric: took 39.27µs to acquireMachinesLock for "ha-550225-m02"
	I1002 07:08:53.053391  341591 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:08:53.053401  341591 fix.go:54] fixHost starting: m02
	I1002 07:08:53.053663  341591 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:08:53.082995  341591 fix.go:112] recreateIfNeeded on ha-550225-m02: state=Stopped err=<nil>
	W1002 07:08:53.083020  341591 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:08:53.086409  341591 out.go:252] * Restarting existing docker container for "ha-550225-m02" ...
	I1002 07:08:53.086490  341591 cli_runner.go:164] Run: docker start ha-550225-m02
	I1002 07:08:53.526547  341591 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:08:53.560540  341591 kic.go:430] container "ha-550225-m02" state is running.
	I1002 07:08:53.560941  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:08:53.589319  341591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:08:53.589569  341591 machine.go:93] provisionDockerMachine start ...
	I1002 07:08:53.589631  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:53.613911  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:53.614275  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:53.614286  341591 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:08:53.615331  341591 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 07:08:56.845810  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:08:56.845831  341591 ubuntu.go:182] provisioning hostname "ha-550225-m02"
	I1002 07:08:56.845894  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:56.874342  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:56.874643  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:56.874653  341591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225-m02 && echo "ha-550225-m02" | sudo tee /etc/hostname
	I1002 07:08:57.125200  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:08:57.125348  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:57.175744  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:57.176048  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:57.176063  341591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:08:57.375895  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:08:57.375973  341591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:08:57.376006  341591 ubuntu.go:190] setting up certificates
	I1002 07:08:57.376047  341591 provision.go:84] configureAuth start
	I1002 07:08:57.376159  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:08:57.404649  341591 provision.go:143] copyHostCerts
	I1002 07:08:57.404689  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:57.404723  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:08:57.404730  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:08:57.404806  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:08:57.404883  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:57.404899  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:08:57.404903  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:08:57.404928  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:08:57.404966  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:57.404981  341591 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:08:57.404985  341591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:08:57.405007  341591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:08:57.405049  341591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225-m02 san=[127.0.0.1 192.168.49.3 ha-550225-m02 localhost minikube]
	I1002 07:08:58.253352  341591 provision.go:177] copyRemoteCerts
	I1002 07:08:58.253471  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:08:58.253549  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:58.284716  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:58.445457  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:08:58.445522  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:08:58.470364  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:08:58.470427  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:08:58.499404  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:08:58.499467  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 07:08:58.532579  341591 provision.go:87] duration metric: took 1.156483399s to configureAuth
	I1002 07:08:58.532607  341591 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:08:58.532851  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:08:58.532977  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:58.555257  341591 main.go:141] libmachine: Using SSH client type: native
	I1002 07:08:58.555589  341591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1002 07:08:58.555604  341591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:08:59.611219  341591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:08:59.611244  341591 machine.go:96] duration metric: took 6.021666332s to provisionDockerMachine
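
The drop-in written just above, CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' in /etc/sysconfig/crio.minikube followed by a crio restart, marks the whole in-cluster service CIDR as a plain-HTTP registry range, so addon registries exposed on ClusterIPs can be pulled from without TLS. How the variable reaches crio depends on the base image; it is presumably referenced from the crio systemd unit, which can be confirmed with:

    grep -rn CRIO_MINIKUBE_OPTIONS /lib/systemd/system /etc/systemd/system 2>/dev/null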
	I1002 07:08:59.611278  341591 start.go:293] postStartSetup for "ha-550225-m02" (driver="docker")
	I1002 07:08:59.611297  341591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:08:59.611400  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:08:59.611473  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.649812  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:59.756024  341591 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:08:59.760197  341591 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:08:59.760226  341591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:08:59.760237  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:08:59.760299  341591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:08:59.760377  341591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:08:59.760384  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:08:59.760484  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:08:59.769466  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:08:59.791590  341591 start.go:296] duration metric: took 180.289185ms for postStartSetup
	I1002 07:08:59.791715  341591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:08:59.791794  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.812896  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:59.913229  341591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:08:59.919306  341591 fix.go:56] duration metric: took 6.865897009s for fixHost
	I1002 07:08:59.919329  341591 start.go:83] releasing machines lock for "ha-550225-m02", held for 6.865950129s
	I1002 07:08:59.919398  341591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:08:59.946647  341591 out.go:179] * Found network options:
	I1002 07:08:59.949695  341591 out.go:179]   - NO_PROXY=192.168.49.2
	W1002 07:08:59.952715  341591 proxy.go:120] fail to check proxy env: Error ip not in block
	W1002 07:08:59.952759  341591 proxy.go:120] fail to check proxy env: Error ip not in block
	I1002 07:08:59.952829  341591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:08:59.952894  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.953175  341591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:08:59.953233  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:08:59.989027  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:08:59.990560  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:09:00.478157  341591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:09:00.501356  341591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:09:00.501454  341591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:09:00.524313  341591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:09:00.524374  341591 start.go:495] detecting cgroup driver to use...
	I1002 07:09:00.524424  341591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:09:00.524542  341591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:09:00.551686  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:09:00.586292  341591 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:09:00.586360  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:09:00.619869  341591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:09:00.637822  341591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:09:01.096286  341591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:09:01.469209  341591 docker.go:234] disabling docker service ...
	I1002 07:09:01.469292  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:09:01.568628  341591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:09:01.594625  341591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:09:01.844380  341591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:09:02.076706  341591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:09:02.091901  341591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:09:02.109279  341591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:09:02.109364  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.122659  341591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:09:02.122751  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.137700  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.152110  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.170421  341591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:09:02.185373  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.201415  341591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.215850  341591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:09:02.226273  341591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:09:02.235058  341591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:09:02.244989  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:09:02.482152  341591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:10:32.816328  341591 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.334137072s)
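
The sed series above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls; IP forwarding is then enabled and crio restarted. The restart alone took 1m30s on this run, a large share of the m02 bring-up time. The effective settings can be spot-checked with:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf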
	I1002 07:10:32.816356  341591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:10:32.816423  341591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:10:32.820364  341591 start.go:563] Will wait 60s for crictl version
	I1002 07:10:32.820431  341591 ssh_runner.go:195] Run: which crictl
	I1002 07:10:32.824000  341591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:10:32.850862  341591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:10:32.850953  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:10:32.880614  341591 ssh_runner.go:195] Run: crio --version
	I1002 07:10:32.912245  341591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:10:32.915198  341591 out.go:179]   - env NO_PROXY=192.168.49.2
	I1002 07:10:32.918443  341591 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:10:32.933458  341591 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:10:32.937660  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:10:32.947835  341591 mustload.go:65] Loading cluster: ha-550225
	I1002 07:10:32.948074  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:10:32.948339  341591 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:10:32.965455  341591 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:10:32.965737  341591 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.3
	I1002 07:10:32.965753  341591 certs.go:195] generating shared ca certs ...
	I1002 07:10:32.965768  341591 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:10:32.965883  341591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:10:32.965988  341591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:10:32.966005  341591 certs.go:257] generating profile certs ...
	I1002 07:10:32.966093  341591 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:10:32.966164  341591 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.e172f685
	I1002 07:10:32.966209  341591 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:10:32.966223  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:10:32.966236  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:10:32.966258  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:10:32.966274  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:10:32.966287  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:10:32.966299  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:10:32.966316  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:10:32.966327  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:10:32.966380  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:10:32.966412  341591 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:10:32.966426  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:10:32.966450  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:10:32.966474  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:10:32.966495  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:10:32.966534  341591 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:10:32.966563  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:32.966580  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:10:32.966591  341591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:10:32.966649  341591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:10:32.984090  341591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:10:33.079415  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1002 07:10:33.085346  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1002 07:10:33.094080  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1002 07:10:33.098124  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1002 07:10:33.106895  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1002 07:10:33.110488  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1002 07:10:33.119266  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1002 07:10:33.123712  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1002 07:10:33.133884  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1002 07:10:33.137901  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1002 07:10:33.146372  341591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1002 07:10:33.150238  341591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1002 07:10:33.158857  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:10:33.178733  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:10:33.198632  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:10:33.218076  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:10:33.238363  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:10:33.257196  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:10:33.276752  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:10:33.296959  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:10:33.315515  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:10:33.334382  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:10:33.353232  341591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:10:33.371930  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1002 07:10:33.386343  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1002 07:10:33.402145  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1002 07:10:33.416991  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1002 07:10:33.433404  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1002 07:10:33.447888  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1002 07:10:33.461804  341591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1002 07:10:33.478080  341591 ssh_runner.go:195] Run: openssl version
	I1002 07:10:33.486077  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:10:33.496093  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:33.500252  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:33.500323  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:10:33.542203  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:10:33.550474  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:10:33.559422  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:10:33.563475  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:10:33.563544  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:10:33.606638  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:10:33.614955  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:10:33.624760  341591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:10:33.629454  341591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:10:33.629532  341591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:10:33.670697  341591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
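The hash-and-symlink sequence above publishes minikube's CAs into the system trust store: OpenSSL looks certificates up in /etc/ssl/certs through "<subject-hash>.0" links, where the hash is what `openssl x509 -hash -noout` prints. A small sketch of the same two steps, shelling out to openssl just as the runner does (helper name and paths are illustrative):

// Hypothetical sketch of the hash-and-symlink step shown above: each installed
// PEM gets a "<subject-hash>.0" symlink so OpenSSL can resolve it as a CA.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}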
	I1002 07:10:33.679136  341591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:10:33.683757  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:10:33.729404  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:10:33.775724  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:10:33.817095  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:10:33.859304  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:10:33.900718  341591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
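The `-checkend 86400` runs above verify that each existing control-plane certificate is still valid for at least another 24 hours before it is reused on the joining node. An equivalent check in Go using crypto/x509 (hypothetical helper; assumes a single PEM block per file):

// Hypothetical sketch of the `openssl x509 -checkend 86400` checks above:
// parse a PEM certificate and confirm it does not expire within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(d).After(cert.NotAfter) {
		return fmt.Errorf("certificate expires at %s, within %s", cert.NotAfter, d)
	}
	return nil
}

func main() {
	err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(err)
}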
	I1002 07:10:33.942018  341591 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1002 07:10:33.942118  341591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:10:33.942147  341591 kube-vip.go:115] generating kube-vip config ...
	I1002 07:10:33.942211  341591 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:10:33.955152  341591 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:10:33.955209  341591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
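Per the kube-vip.go:163 message above, control-plane load balancing is skipped because the `lsmod | grep ip_vs` probe exited non-zero, so the generated manifest only carries the VIP/leader-election settings (cp_enable) without IPVS-based load balancing. A rough equivalent of that probe in Go, reading /proc/modules (the data lsmod formats) instead of running lsmod (function name is illustrative):

// Hypothetical sketch of the `lsmod | grep ip_vs` probe above: lsmod is a
// formatted view of /proc/modules, so scanning that file for an ip_vs entry
// answers the same question.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func ipvsLoaded() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := ipvsLoaded()
	fmt.Println("ip_vs loaded:", ok, "err:", err)
}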
	I1002 07:10:33.955278  341591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:10:33.964060  341591 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:10:33.964146  341591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1002 07:10:33.972349  341591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 07:10:33.986955  341591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:10:34.000411  341591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:10:34.019944  341591 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:10:34.024237  341591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:10:34.035378  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:10:34.172194  341591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:10:34.188479  341591 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:10:34.188914  341591 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:10:34.194079  341591 out.go:179] * Verifying Kubernetes components...
	I1002 07:10:34.196849  341591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:10:34.335762  341591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:10:34.350979  341591 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1002 07:10:34.351051  341591 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1002 07:10:34.351428  341591 node_ready.go:35] waiting up to 6m0s for node "ha-550225-m02" to be "Ready" ...
	I1002 07:11:06.236659  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:11:06.237065  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W1002 07:11:08.352628  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:10.352901  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:12.852094  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:11:14.852800  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:12:19.143807  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:12:19.144210  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:52046->192.168.49.2:8443: read: connection reset by peer
	W1002 07:12:21.352097  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:23.352198  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:25.352707  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:27.852697  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:30.352903  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:32.852934  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:35.352921  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:37.852899  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:12:40.352147  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:13:45.017485  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:13:45.017917  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:59354->192.168.49.2:8443: read: connection reset by peer
	W1002 07:13:47.352022  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:49.352714  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:51.352825  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:53.852618  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:55.852865  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:13:58.351961  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:00.352833  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:02.852671  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:04.852832  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:06.852923  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:09.352699  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:11.852644  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:14.352881  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:16.852748  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:19.352661  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:21.852776  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:23.852965  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:25.853064  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:38.355323  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	W1002 07:14:48.356581  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	I1002 07:14:50.705710  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:14:50.706028  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:34198->192.168.49.2:8443: read: connection reset by peer
	W1002 07:14:52.852642  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:55.352291  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:57.352649  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:14:59.852686  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:02.351992  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:04.352640  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:06.852688  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:09.351928  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:11.352599  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:13.352684  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:15.852672  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:17.852933  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:20.352697  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:22.852904  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:25.352921  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:27.852663  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:30.352554  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:32.352752  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:34.352832  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:36.852783  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:39.352648  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:41.352902  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:43.851962  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:46.352385  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:48.352592  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:50.352899  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:52.852880  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:55.352702  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:57.852560  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:15:59.852697  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:01.852832  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:04.352611  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:06.852632  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:08.852866  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:16:20.352850  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	W1002 07:16:30.353494  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": net/http: TLS handshake timeout
	I1002 07:16:32.822894  341591 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02"
	W1002 07:16:32.823551  341591 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:44364->192.168.49.2:8443: read: connection reset by peer
	I1002 07:16:34.352311  341591 node_ready.go:38] duration metric: took 6m0.000854058s for node "ha-550225-m02" to be "Ready" ...
	I1002 07:16:34.356665  341591 out.go:203] 
	W1002 07:16:34.359815  341591 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 07:16:34.359839  341591 out.go:285] * 
	W1002 07:16:34.362170  341591 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:16:34.365348  341591 out.go:203] 
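The failure mode above is a readiness poll that never succeeds: node_ready.go waits up to 6m0s for ha-550225-m02's Ready condition while every GET against https://192.168.49.2:8443 is refused or times out, and the expired deadline surfaces as the GUEST_START exit. For reference, a minimal client-go sketch of this kind of poll (kubeconfig path, node name, and intervals are placeholders, not minikube's actual node_ready implementation):

// Minimal client-go sketch of polling a node's Ready condition, in the spirit
// of the node_ready wait above. Transient API errors are retried until the
// overall timeout elapses.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Connection refused / TLS timeouts: keep retrying, do not abort.
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(context.Background(), cs, "ha-550225-m02", 6*time.Minute))
}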
	
	
	==> CRI-O <==
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.079197127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.08556225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.086082362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.107512425Z" level=info msg="Created container 075b15e6c74a52fc823514f3eb205759d40a99a80d0859594b42aca28159924d: kube-system/kube-controller-manager-ha-550225/kube-controller-manager" id=e9671816-71ab-4ee2-9a2b-f2ddea4bdc9a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.108304924Z" level=info msg="Starting container: 075b15e6c74a52fc823514f3eb205759d40a99a80d0859594b42aca28159924d" id=019ec56d-600d-4a41-a942-abd9b0a4b5cf name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:16:09 ha-550225 crio[664]: time="2025-10-02T07:16:09.110179289Z" level=info msg="Started container" PID=1236 containerID=075b15e6c74a52fc823514f3eb205759d40a99a80d0859594b42aca28159924d description=kube-system/kube-controller-manager-ha-550225/kube-controller-manager id=019ec56d-600d-4a41-a942-abd9b0a4b5cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c10db252af9dad7133c29cf3fd7ff82b0ebcd9783fb3ae1d2569c9b69373fb8
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.077756642Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=4d37191b-e380-430b-9019-cfb9dcd6f54d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.079249145Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=52cf1b86-8421-41d9-9bd0-29ca469613d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.080627537Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-550225/kube-apiserver" id=914c8388-6f74-471c-aa31-3a90fd94f956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.080887618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.089577727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.090437329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.121966352Z" level=info msg="Created container ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f: kube-system/kube-apiserver-ha-550225/kube-apiserver" id=914c8388-6f74-471c-aa31-3a90fd94f956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.122937741Z" level=info msg="Starting container: ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f" id=f5515dda-06cd-465d-9126-0a5d2d0f75c5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:16:10 ha-550225 crio[664]: time="2025-10-02T07:16:10.134703942Z" level=info msg="Started container" PID=1247 containerID=ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f description=kube-system/kube-apiserver-ha-550225/kube-apiserver id=f5515dda-06cd-465d-9126-0a5d2d0f75c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6915c27c6e4c56041d4460161b4b50ad554915297fd4510ea5142f073c63dcf8
	Oct 02 07:16:31 ha-550225 conmon[1244]: conmon ec59b9b67a698e5db189 <ninfo>: container 1247 exited with status 255
	Oct 02 07:16:31 ha-550225 crio[664]: time="2025-10-02T07:16:31.825737329Z" level=info msg="Stopping container: ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f (timeout: 30s)" id=41cc4e72-76db-457f-859f-5e5fe66d5076 name=/runtime.v1.RuntimeService/StopContainer
	Oct 02 07:16:31 ha-550225 crio[664]: time="2025-10-02T07:16:31.836349221Z" level=info msg="Stopped container ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f: kube-system/kube-apiserver-ha-550225/kube-apiserver" id=41cc4e72-76db-457f-859f-5e5fe66d5076 name=/runtime.v1.RuntimeService/StopContainer
	Oct 02 07:16:32 ha-550225 crio[664]: time="2025-10-02T07:16:32.207132978Z" level=info msg="Removing container: 7b6abe1f2f6e802787eb5442b81fb8a6b3fcefd828d59667468088d5032dd0c4" id=d30010dd-5488-4c1d-9b4d-6f59d8f62713 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:32 ha-550225 crio[664]: time="2025-10-02T07:16:32.215867806Z" level=info msg="Error loading conmon cgroup of container 7b6abe1f2f6e802787eb5442b81fb8a6b3fcefd828d59667468088d5032dd0c4: cgroup deleted" id=d30010dd-5488-4c1d-9b4d-6f59d8f62713 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:32 ha-550225 crio[664]: time="2025-10-02T07:16:32.218879511Z" level=info msg="Removed container 7b6abe1f2f6e802787eb5442b81fb8a6b3fcefd828d59667468088d5032dd0c4: kube-system/kube-apiserver-ha-550225/kube-apiserver" id=d30010dd-5488-4c1d-9b4d-6f59d8f62713 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:32 ha-550225 conmon[1233]: conmon 075b15e6c74a52fc8235 <ninfo>: container 1236 exited with status 1
	Oct 02 07:16:33 ha-550225 crio[664]: time="2025-10-02T07:16:33.212373377Z" level=info msg="Removing container: a7d0e0a58f7b8248b82d9489ac4e72aa74556902886fc58d6212397adf27e207" id=ca09262e-435d-4b74-8729-ff01bba5fbce name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:33 ha-550225 crio[664]: time="2025-10-02T07:16:33.219592613Z" level=info msg="Error loading conmon cgroup of container a7d0e0a58f7b8248b82d9489ac4e72aa74556902886fc58d6212397adf27e207: cgroup deleted" id=ca09262e-435d-4b74-8729-ff01bba5fbce name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 07:16:33 ha-550225 crio[664]: time="2025-10-02T07:16:33.222685139Z" level=info msg="Removed container a7d0e0a58f7b8248b82d9489ac4e72aa74556902886fc58d6212397adf27e207: kube-system/kube-controller-manager-ha-550225/kube-controller-manager" id=ca09262e-435d-4b74-8729-ff01bba5fbce name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	ec59b9b67a698       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   29 seconds ago      Exited              kube-apiserver            6                   6915c27c6e4c5       kube-apiserver-ha-550225            kube-system
	075b15e6c74a5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   30 seconds ago      Exited              kube-controller-manager   7                   4c10db252af9d       kube-controller-manager-ha-550225   kube-system
	883d49fba5ac5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   2 minutes ago       Running             etcd                      2                   b3ee9fc964046       etcd-ha-550225                      kube-system
	d6201e9ebb1f7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Exited              etcd                      1                   b3ee9fc964046       etcd-ha-550225                      kube-system
	a09069dcbe74c       27aa99ef07bb63db109cae7189f6029203a1ba86e8d201ca72eb836e3cdd0b43   7 minutes ago       Running             kube-vip                  0                   0cbc1c071aca4       kube-vip-ha-550225                  kube-system
	ff6f36ad276da       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            1                   356b386bea9bb       kube-scheduler-ha-550225            kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014797] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531434] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.039899] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.787301] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.571073] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 2 05:52] hrtimer: interrupt took 24222969 ns
	[Oct 2 06:40] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:42] overlayfs: idmapped layers are currently not supported
	[  +0.072713] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 06:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 06:49] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:03] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:06] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:07] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:08] overlayfs: idmapped layers are currently not supported
	[  +3.056037] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [883d49fba5ac5d237dfa6b26b5b95e98f640c5dea3f2599a3b517c0c8be55896] <==
	{"level":"info","ts":"2025-10-02T07:16:36.517513Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2288] sent MsgPreVote request to 340e91ee989e8740 at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:36.517527Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2288] sent MsgPreVote request to ae3c16a0ff0d2d6f at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:36.517555Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:36.517565Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-10-02T07:16:38.112768Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040356167889187,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-10-02T07:16:38.117957Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:38.118004Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:38.118027Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2288] sent MsgPreVote request to 340e91ee989e8740 at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:38.118044Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2288] sent MsgPreVote request to ae3c16a0ff0d2d6f at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:38.118085Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:38.118104Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-10-02T07:16:38.613925Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040356167889187,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-02T07:16:38.854437Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"340e91ee989e8740","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-10-02T07:16:38.854512Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"340e91ee989e8740","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-10-02T07:16:38.854526Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ae3c16a0ff0d2d6f","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-02T07:16:38.854559Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ae3c16a0ff0d2d6f","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-02T07:16:39.114100Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040356167889187,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-10-02T07:16:39.614936Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040356167889187,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-10-02T07:16:39.717279Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:39.717334Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:39.717357Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2288] sent MsgPreVote request to 340e91ee989e8740 at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:39.717368Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2288] sent MsgPreVote request to ae3c16a0ff0d2d6f at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:39.717402Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-10-02T07:16:39.717413Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-10-02T07:16:40.115935Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128040356167889187,"retry-timeout":"500ms"}
	
	
	==> etcd [d6201e9ebb1f7834795f1ed34af1c1531b7711bfef7ba9ec4f8b86cb19833552] <==
	{"level":"info","ts":"2025-10-02T07:14:08.631118Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T07:14:08.631162Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:14:08.631196Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:14:08.631205Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:14:08.631187Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"340e91ee989e8740"}
	{"level":"warn","ts":"2025-10-02T07:14:08.631246Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:14:08.631303Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:14:08.631312Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:14:08.631281Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631330Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631404Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631429Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631449Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631462Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"340e91ee989e8740"}
	{"level":"info","ts":"2025-10-02T07:14:08.631473Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631483Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631503Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631522Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631535Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631547Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.631563Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"ae3c16a0ff0d2d6f"}
	{"level":"info","ts":"2025-10-02T07:14:08.635633Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T07:14:08.635736Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:14:08.635777Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T07:14:08.635785Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-550225","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 07:16:40 up  1:59,  0 user,  load average: 0.29, 0.65, 1.20
	Linux ha-550225 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [ec59b9b67a698e5db18921d0840403ce5d2f6a7b3fccdad48b260332ba50678f] <==
	I1002 07:16:10.211958       1 server.go:150] Version: v1.34.1
	I1002 07:16:10.212068       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1002 07:16:11.752050       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1002 07:16:11.752133       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1002 07:16:11.752166       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1002 07:16:11.752200       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1002 07:16:11.752232       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1002 07:16:11.752261       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1002 07:16:11.752293       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1002 07:16:11.752324       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1002 07:16:11.752356       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1002 07:16:11.752386       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1002 07:16:11.752419       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1002 07:16:11.752463       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	I1002 07:16:11.788269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	W1002 07:16:11.797033       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1002 07:16:11.798587       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1002 07:16:11.811605       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 07:16:11.815343       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1002 07:16:11.815458       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1002 07:16:11.816365       1 instance.go:239] Using reconciler: lease
	W1002 07:16:11.818696       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1002 07:16:31.782202       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1002 07:16:31.790759       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1002 07:16:31.817391       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [075b15e6c74a52fc823514f3eb205759d40a99a80d0859594b42aca28159924d] <==
	I1002 07:16:10.596311       1 serving.go:386] Generated self-signed cert in-memory
	I1002 07:16:12.050398       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1002 07:16:12.050434       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:16:12.052007       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 07:16:12.052116       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 07:16:12.052771       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1002 07:16:12.052830       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 07:16:32.827381       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [ff6f36ad276da8f6ea87b58c1a6e4675a17751c812adf0bea3fb2ce4a3183dc0] <==
	E1002 07:15:38.442601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:15:41.823331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:15:44.775785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:15:45.258574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:15:46.491372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:15:46.769593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:15:52.124898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:15:57.001159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:15:57.379525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:15:59.973932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:16:00.856989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 07:16:04.337932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:16:05.218671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 07:16:22.641657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:16:23.826431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 07:16:26.580558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:16:29.569675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:16:32.831141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53042->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:16:32.831282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53062->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:16:32.831376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53078->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:16:32.831476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53080->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:16:32.831576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:60646->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:16:32.831659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:53114->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:16:33.912373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:16:39.183606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kubelet <==
	Oct 02 07:16:38 ha-550225 kubelet[799]: E1002 07:16:38.187566     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:38 ha-550225 kubelet[799]: E1002 07:16:38.288649     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:38 ha-550225 kubelet[799]: E1002 07:16:38.389693     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:38 ha-550225 kubelet[799]: E1002 07:16:38.490142     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:38 ha-550225 kubelet[799]: E1002 07:16:38.590855     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:38 ha-550225 kubelet[799]: E1002 07:16:38.691769     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:38 ha-550225 kubelet[799]: E1002 07:16:38.792610     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:38 ha-550225 kubelet[799]: E1002 07:16:38.893524     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:38 ha-550225 kubelet[799]: E1002 07:16:38.995022     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:39 ha-550225 kubelet[799]: E1002 07:16:39.049912     799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-550225?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 07:16:39 ha-550225 kubelet[799]: I1002 07:16:39.068047     799 kubelet_node_status.go:75] "Attempting to register node" node="ha-550225"
	Oct 02 07:16:39 ha-550225 kubelet[799]: E1002 07:16:39.068699     799 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-550225"
	Oct 02 07:16:39 ha-550225 kubelet[799]: E1002 07:16:39.096369     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:39 ha-550225 kubelet[799]: E1002 07:16:39.197568     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:39 ha-550225 kubelet[799]: E1002 07:16:39.298966     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:39 ha-550225 kubelet[799]: E1002 07:16:39.400101     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:39 ha-550225 kubelet[799]: E1002 07:16:39.500619     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:39 ha-550225 kubelet[799]: E1002 07:16:39.601579     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:39 ha-550225 kubelet[799]: E1002 07:16:39.702530     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:39 ha-550225 kubelet[799]: E1002 07:16:39.803653     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:39 ha-550225 kubelet[799]: E1002 07:16:39.904848     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:40 ha-550225 kubelet[799]: E1002 07:16:40.005941     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:40 ha-550225 kubelet[799]: E1002 07:16:40.107305     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:40 ha-550225 kubelet[799]: E1002 07:16:40.208082     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Oct 02 07:16:40 ha-550225 kubelet[799]: E1002 07:16:40.309101     799 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-550225\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-550225 -n ha-550225
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-550225 -n ha-550225: exit status 2 (331.42418ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-550225" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.22s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (2.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 stop --alsologtostderr -v 5
E1002 07:16:41.263962  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-550225 stop --alsologtostderr -v 5: (2.483841673s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5: exit status 7 (133.374994ms)

                                                
                                                
-- stdout --
	ha-550225
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-550225-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-550225-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-550225-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:16:43.332039  346500 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:16:43.332184  346500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:43.332198  346500 out.go:374] Setting ErrFile to fd 2...
	I1002 07:16:43.332204  346500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:43.332516  346500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:16:43.332743  346500 out.go:368] Setting JSON to false
	I1002 07:16:43.332797  346500 mustload.go:65] Loading cluster: ha-550225
	I1002 07:16:43.332867  346500 notify.go:220] Checking for updates...
	I1002 07:16:43.334067  346500 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:43.334092  346500 status.go:174] checking status of ha-550225 ...
	I1002 07:16:43.334811  346500 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:16:43.356251  346500 status.go:371] ha-550225 host status = "Stopped" (err=<nil>)
	I1002 07:16:43.356275  346500 status.go:384] host is not running, skipping remaining checks
	I1002 07:16:43.356282  346500 status.go:176] ha-550225 status: &{Name:ha-550225 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:16:43.356315  346500 status.go:174] checking status of ha-550225-m02 ...
	I1002 07:16:43.356629  346500 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:16:43.379254  346500 status.go:371] ha-550225-m02 host status = "Stopped" (err=<nil>)
	I1002 07:16:43.379322  346500 status.go:384] host is not running, skipping remaining checks
	I1002 07:16:43.379348  346500 status.go:176] ha-550225-m02 status: &{Name:ha-550225-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:16:43.379390  346500 status.go:174] checking status of ha-550225-m03 ...
	I1002 07:16:43.379723  346500 cli_runner.go:164] Run: docker container inspect ha-550225-m03 --format={{.State.Status}}
	I1002 07:16:43.397389  346500 status.go:371] ha-550225-m03 host status = "Stopped" (err=<nil>)
	I1002 07:16:43.397412  346500 status.go:384] host is not running, skipping remaining checks
	I1002 07:16:43.397418  346500 status.go:176] ha-550225-m03 status: &{Name:ha-550225-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:16:43.397438  346500 status.go:174] checking status of ha-550225-m04 ...
	I1002 07:16:43.397780  346500 cli_runner.go:164] Run: docker container inspect ha-550225-m04 --format={{.State.Status}}
	I1002 07:16:43.414609  346500 status.go:371] ha-550225-m04 host status = "Stopped" (err=<nil>)
	I1002 07:16:43.414632  346500 status.go:384] host is not running, skipping remaining checks
	I1002 07:16:43.414638  346500 status.go:176] ha-550225-m04 status: &{Name:ha-550225-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5": ha-550225
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-550225-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-550225-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-550225-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5": ha-550225
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-550225-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-550225-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-550225-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5": ha-550225
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-550225-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-550225-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-550225-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-550225
helpers_test.go:243: (dbg) docker inspect ha-550225:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	        "Created": "2025-10-02T07:02:30.539981852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:08:45.398672695Z",
	            "FinishedAt": "2025-10-02T07:16:42.559270036Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hosts",
	        "LogPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c-json.log",
	        "Name": "/ha-550225",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-550225:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-550225",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	                "LowerDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-550225",
	                "Source": "/var/lib/docker/volumes/ha-550225/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-550225",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-550225",
	                "name.minikube.sigs.k8s.io": "ha-550225",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-550225": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "87a294cab4b5d50d5f227902c62678f378fbede9275f1d54f0b3de7a1f36e1a0",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-550225",
	                        "1c1f8ec53310"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-550225 -n ha-550225
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-550225 -n ha-550225: exit status 7 (75.785775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "ha-550225" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (477.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1002 07:19:28.908016  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:21:41.264747  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:22:31.975837  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:23:04.331503  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:24:28.907652  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-550225 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 105 (7m51.232972839s)

                                                
                                                
-- stdout --
	* [ha-550225] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-550225" primary control-plane node in "ha-550225" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-550225-m02" control-plane node in "ha-550225" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:16:43.556654  346554 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:16:43.556900  346554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:43.556935  346554 out.go:374] Setting ErrFile to fd 2...
	I1002 07:16:43.556957  346554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:43.557253  346554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:16:43.557663  346554 out.go:368] Setting JSON to false
	I1002 07:16:43.558546  346554 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7155,"bootTime":1759382249,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:16:43.558645  346554 start.go:140] virtualization:  
	I1002 07:16:43.562097  346554 out.go:179] * [ha-550225] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:16:43.565995  346554 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:16:43.566065  346554 notify.go:220] Checking for updates...
	I1002 07:16:43.572511  346554 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:16:43.575317  346554 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:43.578176  346554 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:16:43.580964  346554 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:16:43.583787  346554 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:16:43.587186  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:43.587749  346554 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:16:43.619258  346554 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:16:43.619425  346554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:16:43.676323  346554 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:16:43.665454213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:16:43.676450  346554 docker.go:318] overlay module found
	I1002 07:16:43.679463  346554 out.go:179] * Using the docker driver based on existing profile
	I1002 07:16:43.682328  346554 start.go:304] selected driver: docker
	I1002 07:16:43.682357  346554 start.go:924] validating driver "docker" against &{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:16:43.682550  346554 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:16:43.682661  346554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:16:43.739766  346554 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:16:43.730208669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:16:43.740206  346554 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:16:43.740241  346554 cni.go:84] Creating CNI manager for ""
	I1002 07:16:43.740306  346554 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:16:43.740357  346554 start.go:348] cluster config:
	{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:16:43.743601  346554 out.go:179] * Starting "ha-550225" primary control-plane node in "ha-550225" cluster
	I1002 07:16:43.746399  346554 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:16:43.749341  346554 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:16:43.752288  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:43.752352  346554 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:16:43.752374  346554 cache.go:58] Caching tarball of preloaded images
	I1002 07:16:43.752377  346554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:16:43.752484  346554 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:16:43.752495  346554 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:16:43.752642  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:43.772750  346554 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:16:43.772775  346554 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:16:43.772803  346554 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:16:43.772827  346554 start.go:360] acquireMachinesLock for ha-550225: {Name:mkc1f009b4f35f6b87d580d72d0a621c44a033f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:16:43.772899  346554 start.go:364] duration metric: took 46.236µs to acquireMachinesLock for "ha-550225"
	I1002 07:16:43.772922  346554 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:16:43.772934  346554 fix.go:54] fixHost starting: 
	I1002 07:16:43.773187  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:16:43.794446  346554 fix.go:112] recreateIfNeeded on ha-550225: state=Stopped err=<nil>
	W1002 07:16:43.794478  346554 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:16:43.797824  346554 out.go:252] * Restarting existing docker container for "ha-550225" ...
	I1002 07:16:43.797912  346554 cli_runner.go:164] Run: docker start ha-550225
	I1002 07:16:44.052064  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:16:44.071577  346554 kic.go:430] container "ha-550225" state is running.
	I1002 07:16:44.071977  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:44.097000  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:44.097247  346554 machine.go:93] provisionDockerMachine start ...
	I1002 07:16:44.097316  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:44.119603  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:44.120087  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:44.120103  346554 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:16:44.120661  346554 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57572->127.0.0.1:33188: read: connection reset by peer
	I1002 07:16:47.250760  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:16:47.250786  346554 ubuntu.go:182] provisioning hostname "ha-550225"
	I1002 07:16:47.250888  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:47.268212  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:47.268525  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:47.268543  346554 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225 && echo "ha-550225" | sudo tee /etc/hostname
	I1002 07:16:47.408749  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:16:47.408837  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:47.428229  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:47.428559  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:47.428582  346554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:16:47.563394  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:16:47.563422  346554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:16:47.563445  346554 ubuntu.go:190] setting up certificates
	I1002 07:16:47.563480  346554 provision.go:84] configureAuth start
	I1002 07:16:47.563555  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:47.583742  346554 provision.go:143] copyHostCerts
	I1002 07:16:47.583804  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:47.583843  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:16:47.583865  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:47.583942  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:16:47.584044  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:47.584067  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:16:47.584076  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:47.584105  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:16:47.584165  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:47.584188  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:16:47.584197  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:47.584232  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:16:47.584294  346554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225 san=[127.0.0.1 192.168.49.2 ha-550225 localhost minikube]
	I1002 07:16:49.085710  346554 provision.go:177] copyRemoteCerts
	I1002 07:16:49.085804  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:16:49.085919  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.102600  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.203033  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:16:49.203111  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:16:49.220709  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:16:49.220773  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 07:16:49.238283  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:16:49.238380  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:16:49.255763  346554 provision.go:87] duration metric: took 1.692265184s to configureAuth
	I1002 07:16:49.255832  346554 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:16:49.256105  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:49.256221  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.273296  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:49.273613  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:49.273636  346554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:16:49.545258  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:16:49.545281  346554 machine.go:96] duration metric: took 5.448016594s to provisionDockerMachine
	I1002 07:16:49.545292  346554 start.go:293] postStartSetup for "ha-550225" (driver="docker")
	I1002 07:16:49.545335  346554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:16:49.545400  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:16:49.545448  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.562765  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.663440  346554 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:16:49.667012  346554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:16:49.667043  346554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:16:49.667055  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:16:49.667131  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:16:49.667227  346554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:16:49.667243  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:16:49.667356  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:16:49.675157  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:49.693566  346554 start.go:296] duration metric: took 148.259083ms for postStartSetup
	I1002 07:16:49.693674  346554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:16:49.693733  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.711628  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.808263  346554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:16:49.813222  346554 fix.go:56] duration metric: took 6.040285845s for fixHost
	I1002 07:16:49.813250  346554 start.go:83] releasing machines lock for "ha-550225", held for 6.040338171s
	I1002 07:16:49.813321  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:49.832086  346554 ssh_runner.go:195] Run: cat /version.json
	I1002 07:16:49.832138  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.832170  346554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:16:49.832223  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.860178  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.874339  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.958866  346554 ssh_runner.go:195] Run: systemctl --version
	I1002 07:16:50.049981  346554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:16:50.088401  346554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:16:50.093782  346554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:16:50.093888  346554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:16:50.102679  346554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:16:50.102707  346554 start.go:495] detecting cgroup driver to use...
	I1002 07:16:50.102739  346554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:16:50.102790  346554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:16:50.119025  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:16:50.132406  346554 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:16:50.132508  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:16:50.147702  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:16:50.161840  346554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:16:50.285662  346554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:16:50.412243  346554 docker.go:234] disabling docker service ...
	I1002 07:16:50.412358  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:16:50.429880  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:16:50.443435  346554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:16:50.570143  346554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:16:50.705200  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:16:50.718349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:16:50.732391  346554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:16:50.732489  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.741688  346554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:16:50.741842  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.751301  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.760089  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.769286  346554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:16:50.777484  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.786723  346554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.795606  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.804393  346554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:16:50.812287  346554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:16:50.819774  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:16:50.940841  346554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:16:51.084825  346554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:16:51.084933  346554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:16:51.088952  346554 start.go:563] Will wait 60s for crictl version
	I1002 07:16:51.089022  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:16:51.093255  346554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:16:51.121871  346554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:16:51.122035  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:16:51.151306  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:16:51.186151  346554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:16:51.188993  346554 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:16:51.205719  346554 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:16:51.209600  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:16:51.219722  346554 kubeadm.go:883] updating cluster {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:16:51.219870  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:51.219932  346554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:16:51.259348  346554 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:16:51.259373  346554 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:16:51.259435  346554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:16:51.285823  346554 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:16:51.285850  346554 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:16:51.285860  346554 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:16:51.285975  346554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:16:51.286067  346554 ssh_runner.go:195] Run: crio config
	I1002 07:16:51.349840  346554 cni.go:84] Creating CNI manager for ""
	I1002 07:16:51.349864  346554 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:16:51.349907  346554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:16:51.349941  346554 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-550225 NodeName:ha-550225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:16:51.350123  346554 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-550225"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:16:51.350149  346554 kube-vip.go:115] generating kube-vip config ...
	I1002 07:16:51.350220  346554 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:16:51.362455  346554 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:16:51.362590  346554 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1002 07:16:51.362683  346554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:16:51.370716  346554 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:16:51.370824  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 07:16:51.378562  346554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:16:51.392384  346554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:16:51.405890  346554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1002 07:16:51.418852  346554 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:16:51.431748  346554 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:16:51.435456  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:16:51.445200  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:16:51.564279  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:16:51.580309  346554 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.2
	I1002 07:16:51.580335  346554 certs.go:195] generating shared ca certs ...
	I1002 07:16:51.580352  346554 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:51.580577  346554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:16:51.580643  346554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:16:51.580658  346554 certs.go:257] generating profile certs ...
	I1002 07:16:51.580760  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:16:51.580851  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa
	I1002 07:16:51.580915  346554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:16:51.580931  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:16:51.580960  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:16:51.580981  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:16:51.581001  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:16:51.581029  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:16:51.581060  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:16:51.581082  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:16:51.581099  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:16:51.581172  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:16:51.581223  346554 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:16:51.581238  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:16:51.581269  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:16:51.581323  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:16:51.581355  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:16:51.581425  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:51.581476  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.581497  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.581511  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.582046  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:16:51.608528  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:16:51.630032  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:16:51.651693  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:16:51.672816  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:16:51.694334  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:16:51.713045  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:16:51.734929  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:16:51.759074  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:16:51.783798  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:16:51.810129  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:16:51.829572  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:16:51.844038  346554 ssh_runner.go:195] Run: openssl version
	I1002 07:16:51.850521  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:16:51.859107  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.863052  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.863200  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.905139  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:16:51.915686  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:16:51.924646  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.928631  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.928697  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.970474  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:16:51.979037  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:16:51.988282  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.992329  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.992400  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:16:52.034608  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:16:52.043437  346554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:16:52.047807  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:16:52.090171  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:16:52.132189  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:16:52.173672  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:16:52.215246  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:16:52.259493  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 07:16:52.303359  346554 kubeadm.go:400] StartCluster: {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:16:52.303541  346554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:16:52.303637  346554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:16:52.411948  346554 cri.go:89] found id: ""
	I1002 07:16:52.412087  346554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:16:52.423926  346554 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:16:52.423985  346554 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:16:52.424072  346554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:16:52.435971  346554 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:16:52.436519  346554 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-550225" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:52.436691  346554 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-292504/kubeconfig needs updating (will repair): [kubeconfig missing "ha-550225" cluster setting kubeconfig missing "ha-550225" context setting]
	I1002 07:16:52.436999  346554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:52.437624  346554 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:16:52.438178  346554 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:16:52.438372  346554 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:16:52.438396  346554 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:16:52.438439  346554 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:16:52.438479  346554 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:16:52.438242  346554 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:16:52.438946  346554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:16:52.453843  346554 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:16:52.453908  346554 kubeadm.go:601] duration metric: took 29.902711ms to restartPrimaryControlPlane
	I1002 07:16:52.454041  346554 kubeadm.go:402] duration metric: took 150.691034ms to StartCluster
	I1002 07:16:52.454081  346554 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:52.454172  346554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:52.454859  346554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:52.455192  346554 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:16:52.455245  346554 start.go:241] waiting for startup goroutines ...
	I1002 07:16:52.455279  346554 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:16:52.455778  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:52.480332  346554 out.go:179] * Enabled addons: 
	I1002 07:16:52.484238  346554 addons.go:514] duration metric: took 28.941955ms for enable addons: enabled=[]
	I1002 07:16:52.484336  346554 start.go:246] waiting for cluster config update ...
	I1002 07:16:52.484369  346554 start.go:255] writing updated cluster config ...
	I1002 07:16:52.488274  346554 out.go:203] 
	I1002 07:16:52.492458  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:52.492645  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:52.496127  346554 out.go:179] * Starting "ha-550225-m02" control-plane node in "ha-550225" cluster
	I1002 07:16:52.499195  346554 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:16:52.502435  346554 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:16:52.505497  346554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:16:52.505566  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:52.505677  346554 cache.go:58] Caching tarball of preloaded images
	I1002 07:16:52.505807  346554 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:16:52.505838  346554 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:16:52.506003  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:52.530361  346554 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:16:52.530380  346554 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:16:52.530392  346554 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:16:52.530415  346554 start.go:360] acquireMachinesLock for ha-550225-m02: {Name:mk11ef625bc214163cbeacdb736ddec4214a8374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:16:52.530475  346554 start.go:364] duration metric: took 37.3µs to acquireMachinesLock for "ha-550225-m02"
	I1002 07:16:52.530499  346554 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:16:52.530506  346554 fix.go:54] fixHost starting: m02
	I1002 07:16:52.530790  346554 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:16:52.559198  346554 fix.go:112] recreateIfNeeded on ha-550225-m02: state=Stopped err=<nil>
	W1002 07:16:52.559226  346554 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:16:52.563143  346554 out.go:252] * Restarting existing docker container for "ha-550225-m02" ...
	I1002 07:16:52.563247  346554 cli_runner.go:164] Run: docker start ha-550225-m02
	I1002 07:16:52.985736  346554 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:16:53.019972  346554 kic.go:430] container "ha-550225-m02" state is running.
	I1002 07:16:53.020350  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:53.045172  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:53.045437  346554 machine.go:93] provisionDockerMachine start ...
	I1002 07:16:53.045501  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:53.087166  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:53.087519  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:53.087528  346554 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:16:53.088138  346554 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45188->127.0.0.1:33193: read: connection reset by peer
	I1002 07:16:56.311713  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:16:56.311782  346554 ubuntu.go:182] provisioning hostname "ha-550225-m02"
	I1002 07:16:56.311878  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:56.344609  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:56.344917  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:56.344929  346554 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225-m02 && echo "ha-550225-m02" | sudo tee /etc/hostname
	I1002 07:16:56.639669  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:16:56.639788  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:56.668649  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:56.668967  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:56.668991  346554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:16:56.892812  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:16:56.892848  346554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:16:56.892865  346554 ubuntu.go:190] setting up certificates
	I1002 07:16:56.892886  346554 provision.go:84] configureAuth start
	I1002 07:16:56.892966  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:56.931268  346554 provision.go:143] copyHostCerts
	I1002 07:16:56.931313  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:56.931346  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:16:56.931357  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:56.931436  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:16:56.931520  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:56.931541  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:16:56.931548  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:56.931576  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:16:56.931619  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:56.931640  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:16:56.931645  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:56.931673  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:16:56.931727  346554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225-m02 san=[127.0.0.1 192.168.49.3 ha-550225-m02 localhost minikube]
	I1002 07:16:57.380087  346554 provision.go:177] copyRemoteCerts
	I1002 07:16:57.380161  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:16:57.380209  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:57.399377  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:57.503607  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:16:57.503674  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:16:57.534864  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:16:57.534935  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 07:16:57.579624  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:16:57.579686  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:16:57.613798  346554 provision.go:87] duration metric: took 720.891298ms to configureAuth
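The configureAuth step above regenerates the docker-machine style server certificate for this node (SANs 127.0.0.1, 192.168.49.3, ha-550225-m02, localhost, minikube) and pushes ca.pem, server.pem and server-key.pem into /etc/docker inside the container. A minimal sketch for spot-checking the pushed certificate, assuming the profile and node names from this run:

    minikube -p ha-550225 ssh -n m02 \
      "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"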
	I1002 07:16:57.613866  346554 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:16:57.614125  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:57.614268  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:57.655334  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:57.655649  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:57.655669  346554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:16:58.296218  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:16:58.296241  346554 machine.go:96] duration metric: took 5.250794733s to provisionDockerMachine
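Provisioning ends by writing /etc/sysconfig/crio.minikube with --insecure-registry 10.96.0.0/12 and restarting CRI-O, so registries exposed on ClusterIPs inside the service CIDR can be pulled from without TLS. A sketch of the file as written by the tee command above (the assumption here is that the crio systemd unit in the kicbase image sources it as an environment file):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '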
	I1002 07:16:58.296266  346554 start.go:293] postStartSetup for "ha-550225-m02" (driver="docker")
	I1002 07:16:58.296279  346554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:16:58.296361  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:16:58.296407  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.334246  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.454625  346554 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:16:58.462912  346554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:16:58.462946  346554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:16:58.462957  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:16:58.463024  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:16:58.463132  346554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:16:58.463146  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:16:58.463245  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:16:58.476350  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:58.502934  346554 start.go:296] duration metric: took 206.651168ms for postStartSetup
	I1002 07:16:58.503074  346554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:16:58.503140  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.541010  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.704044  346554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:16:58.724725  346554 fix.go:56] duration metric: took 6.194210695s for fixHost
	I1002 07:16:58.724751  346554 start.go:83] releasing machines lock for "ha-550225-m02", held for 6.194264053s
	I1002 07:16:58.724830  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:58.757236  346554 out.go:179] * Found network options:
	I1002 07:16:58.760259  346554 out.go:179]   - NO_PROXY=192.168.49.2
	W1002 07:16:58.763701  346554 proxy.go:120] fail to check proxy env: Error ip not in block
	W1002 07:16:58.763752  346554 proxy.go:120] fail to check proxy env: Error ip not in block
	I1002 07:16:58.763820  346554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:16:58.763852  346554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:16:58.763870  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.763907  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.799805  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.800051  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:59.297366  346554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:16:59.320265  346554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:16:59.320354  346554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:16:59.335012  346554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:16:59.335039  346554 start.go:495] detecting cgroup driver to use...
	I1002 07:16:59.335070  346554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:16:59.335161  346554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:16:59.357972  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:16:59.378445  346554 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:16:59.378521  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:16:59.402692  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:16:59.423049  346554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:16:59.777657  346554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:17:00.088553  346554 docker.go:234] disabling docker service ...
	I1002 07:17:00.088656  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:17:00.130593  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:17:00.210008  346554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:17:00.633988  346554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:17:01.021589  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:17:01.054167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:17:01.092894  346554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:17:01.092980  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.111830  346554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:17:01.111928  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.139965  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.151897  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.168595  346554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:17:01.186410  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.204646  346554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.221763  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.236700  346554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:17:01.257944  346554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:17:01.272835  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:17:01.618372  346554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:18:32.051852  346554 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.433435555s)
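The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf before this restart: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is switched to cgroupfs to match the driver detected on the host, conmon is moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is re-added to default_sysctls. A sketch of how to confirm the result on the node (the exact layout of the kicbase drop-in is an assumption; only the key/value lines are shown):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",

The restart itself accounts for roughly 90s of this node's start-up time in this run.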
	I1002 07:18:32.051878  346554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:18:32.051938  346554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:18:32.056156  346554 start.go:563] Will wait 60s for crictl version
	I1002 07:18:32.056222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:18:32.060117  346554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:18:32.088770  346554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:18:32.088860  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:18:32.119432  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:18:32.154051  346554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:18:32.156909  346554 out.go:179]   - env NO_PROXY=192.168.49.2
	I1002 07:18:32.159957  346554 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:18:32.177164  346554 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:18:32.181230  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
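The /etc/hosts update uses a rebuild-and-copy idiom instead of redirecting straight into the file, because in "sudo cmd > /etc/hosts" the redirection would still run as the unprivileged SSH user. Restated as a standalone sketch of the same command:

    # drop any stale mapping, append the new one, then install the file with sudo
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts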
	I1002 07:18:32.191471  346554 mustload.go:65] Loading cluster: ha-550225
	I1002 07:18:32.191729  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:18:32.191999  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:18:32.209130  346554 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:18:32.209416  346554 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.3
	I1002 07:18:32.209433  346554 certs.go:195] generating shared ca certs ...
	I1002 07:18:32.209448  346554 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:18:32.209574  346554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:18:32.209622  346554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:18:32.209635  346554 certs.go:257] generating profile certs ...
	I1002 07:18:32.209712  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:18:32.209761  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.e172f685
	I1002 07:18:32.209802  346554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:18:32.209816  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:18:32.209829  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:18:32.209843  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:18:32.209855  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:18:32.209869  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:18:32.209883  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:18:32.209898  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:18:32.209908  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:18:32.209964  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:18:32.209998  346554 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:18:32.210010  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:18:32.210033  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:18:32.210061  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:18:32.210089  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:18:32.210137  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:18:32.210168  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.210187  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.210198  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.210261  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:18:32.227689  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:18:32.315413  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1002 07:18:32.319445  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1002 07:18:32.328111  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1002 07:18:32.331777  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1002 07:18:32.340081  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1002 07:18:32.343746  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1002 07:18:32.351558  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1002 07:18:32.354911  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1002 07:18:32.362878  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1002 07:18:32.366632  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1002 07:18:32.374581  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1002 07:18:32.378281  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1002 07:18:32.386552  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:18:32.405394  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:18:32.422759  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:18:32.440360  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:18:32.457759  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:18:32.475843  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:18:32.493288  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:18:32.510289  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:18:32.527991  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:18:32.545549  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:18:32.562952  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:18:32.580383  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1002 07:18:32.593477  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1002 07:18:32.606933  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1002 07:18:32.619772  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1002 07:18:32.634020  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1002 07:18:32.646873  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1002 07:18:32.659836  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
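The scp calls above copy the cluster-wide key material from the primary node onto m02: the minikube CA and proxy-client CA pairs, the profile's apiserver and proxy-client certs, and the sa.pub/sa.key, front-proxy CA and etcd CA that were read into memory from ha-550225. All control-plane members must present certificates signed by the same CAs, so a quick cross-check (a sketch, not part of the test) is to compare digests on each node:

    # run on every control-plane node; the sums should be identical
    sudo sha256sum /var/lib/minikube/certs/etcd/ca.crt /var/lib/minikube/certs/sa.pub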
	I1002 07:18:32.673417  346554 ssh_runner.go:195] Run: openssl version
	I1002 07:18:32.679719  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:18:32.688081  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.692003  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.692135  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.733286  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:18:32.741334  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:18:32.749624  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.753431  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.753505  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.794364  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:18:32.802247  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:18:32.810290  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.813847  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.813927  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.854739  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:18:32.862471  346554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:18:32.866281  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:18:32.907787  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:18:32.948617  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:18:32.989448  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:18:33.030881  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:18:33.074016  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 07:18:33.117026  346554 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1002 07:18:33.117170  346554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
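The kubelet unit rendered for m02 wants crio.service and overrides ExecStart with --hostname-override=ha-550225-m02 and --node-ip=192.168.49.3; it is installed a little further down as /lib/systemd/system/kubelet.service plus the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in. A sketch for viewing the merged unit systemd actually loads on the node:

    systemctl cat kubelet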
	I1002 07:18:33.117220  346554 kube-vip.go:115] generating kube-vip config ...
	I1002 07:18:33.117288  346554 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:18:33.133837  346554 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:18:33.133931  346554 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
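This static pod manifest runs kube-vip as a host-network pod that claims the control-plane VIP 192.168.49.254 on eth0 via ARP, with leader election over the plndr-cp-lock lease; because the ip_vs modules were not found (lsmod check above), control-plane load balancing is skipped and only VIP failover is configured. Once the kubelet picks the manifest up, a sketch for confirming it on a control-plane node:

    sudo crictl ps --name kube-vip            # the manager container should be running
    ip addr show eth0 | grep 192.168.49.254   # only the current leader holds the VIP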
	I1002 07:18:33.134029  346554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:18:33.142503  346554 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:18:33.142627  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1002 07:18:33.150436  346554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 07:18:33.163196  346554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:18:33.176800  346554 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:18:33.191119  346554 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:18:33.195012  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:18:33.205076  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:18:33.339361  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:18:33.353170  346554 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:18:33.353495  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:18:33.359500  346554 out.go:179] * Verifying Kubernetes components...
	I1002 07:18:33.362288  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:18:33.491257  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:18:33.505467  346554 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1002 07:18:33.505560  346554 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1002 07:18:33.505989  346554 node_ready.go:35] waiting up to 6m0s for node "ha-550225-m02" to be "Ready" ...
	W1002 07:18:35.506749  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:38.010468  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:40.016084  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:42.506872  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:44.507212  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:47.007659  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:49.506544  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:51.506605  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:18:54.785251  346554 node_ready.go:49] node "ha-550225-m02" is "Ready"
	I1002 07:18:54.785285  346554 node_ready.go:38] duration metric: took 21.279267345s for node "ha-550225-m02" to be "Ready" ...
	I1002 07:18:54.785300  346554 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:18:54.785382  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:55.286257  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:55.786278  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:56.285480  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:56.785495  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:57.286432  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:57.786472  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:58.285596  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:58.786260  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:59.286148  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:59.785674  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:00.286401  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:00.786468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:01.286310  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:01.786133  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:02.285476  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:02.785523  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:03.285578  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:03.785477  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:04.285835  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:04.786152  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:05.285495  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:05.785558  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:06.285602  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:06.785496  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:07.286468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:07.786358  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:08.286294  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:08.786349  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:09.286208  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:09.786292  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:10.285577  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:10.785589  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:11.286341  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:11.785523  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:12.286415  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:12.786007  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:13.286205  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:13.786328  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:14.285849  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:14.786397  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:15.285488  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:15.785431  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:16.285445  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:16.785468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:17.285527  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:17.785637  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:18.285535  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:18.786137  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:19.286152  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:19.786052  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:20.285507  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:20.785522  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:21.285716  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:21.786849  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:22.286372  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:22.786418  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:23.286092  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:23.786120  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:24.285506  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:24.785439  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:25.286469  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:25.785780  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:26.285507  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:26.785611  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:27.286260  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:27.785499  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:28.285509  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:28.785521  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:29.285762  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:29.786049  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:30.286329  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:30.785543  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:31.285473  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:31.786013  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:32.285818  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:32.785931  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:33.285557  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
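The block above is the apiserver wait loop: roughly every 500ms minikube re-runs the same pgrep over SSH, and a match would mean a minikube-started kube-apiserver process is up. No match is logged here before the run switches to collecting CRI container logs below. The manual equivalent on the node is simply:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'kube-apiserver not running yet'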
	I1002 07:19:33.786122  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:33.786216  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:33.819648  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:33.819668  346554 cri.go:89] found id: ""
	I1002 07:19:33.819678  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:33.819746  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.823889  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:33.823960  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:33.855251  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:33.855272  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:33.855277  346554 cri.go:89] found id: ""
	I1002 07:19:33.855285  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:33.855351  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.858992  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.862888  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:33.862975  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:33.894144  346554 cri.go:89] found id: ""
	I1002 07:19:33.894169  346554 logs.go:282] 0 containers: []
	W1002 07:19:33.894178  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:33.894184  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:33.894243  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:33.921104  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:33.921125  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:33.921130  346554 cri.go:89] found id: ""
	I1002 07:19:33.921137  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:33.921194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.925016  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.928536  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:33.928631  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:33.961082  346554 cri.go:89] found id: ""
	I1002 07:19:33.961111  346554 logs.go:282] 0 containers: []
	W1002 07:19:33.961121  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:33.961127  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:33.961187  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:33.993876  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:33.993901  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:33.993906  346554 cri.go:89] found id: ""
	I1002 07:19:33.993916  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:33.993979  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.999741  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:34.004783  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:34.004869  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:34.034228  346554 cri.go:89] found id: ""
	I1002 07:19:34.034256  346554 logs.go:282] 0 containers: []
	W1002 07:19:34.034265  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:34.034275  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:34.034288  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:34.096737  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:34.096779  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:34.132301  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:34.132339  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:34.182701  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:34.182737  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:34.217015  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:34.217044  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:34.232712  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:34.232741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:34.652633  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:34.643757    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.644504    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.646352    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647072    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647911    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:34.643757    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.644504    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.646352    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647072    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647911    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:34.652655  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:34.652669  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:34.681086  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:34.681118  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:34.708033  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:34.708062  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:34.793299  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:34.793407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:34.848620  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:34.848649  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:34.948533  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:34.948572  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:37.477483  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:37.488961  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:37.489035  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:37.518325  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:37.518349  346554 cri.go:89] found id: ""
	I1002 07:19:37.518358  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:37.518419  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.522140  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:37.522269  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:37.549073  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:37.549093  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:37.549098  346554 cri.go:89] found id: ""
	I1002 07:19:37.549105  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:37.549190  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.552869  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.556417  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:37.556497  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:37.589096  346554 cri.go:89] found id: ""
	I1002 07:19:37.589122  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.589130  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:37.589137  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:37.589199  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:37.615330  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:37.615354  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:37.615360  346554 cri.go:89] found id: ""
	I1002 07:19:37.615367  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:37.615424  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.619166  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.622673  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:37.622742  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:37.648426  346554 cri.go:89] found id: ""
	I1002 07:19:37.648458  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.648467  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:37.648474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:37.648536  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:37.676515  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:37.676536  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:37.676541  346554 cri.go:89] found id: ""
	I1002 07:19:37.676549  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:37.676605  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.680280  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.684478  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:37.684552  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:37.710689  346554 cri.go:89] found id: ""
	I1002 07:19:37.710713  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.710722  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:37.710731  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:37.710741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:37.807134  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:37.807171  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:37.877814  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:37.869236    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.869721    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871280    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871668    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.873245    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:37.869236    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.869721    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871280    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871668    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.873245    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:37.877839  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:37.877853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:37.920820  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:37.920854  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:37.956765  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:37.956802  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:37.985482  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:37.985510  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:38.017517  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:38.017548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:38.100846  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:38.100884  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:38.136290  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:38.136318  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:38.151732  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:38.151763  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:38.177792  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:38.177822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:38.229226  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:38.229260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:40.756410  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:40.767378  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:40.767448  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:40.799187  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:40.799205  346554 cri.go:89] found id: ""
	I1002 07:19:40.799213  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:40.799268  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.804369  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:40.804454  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:40.830559  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:40.830628  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:40.830652  346554 cri.go:89] found id: ""
	I1002 07:19:40.830679  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:40.830771  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.835205  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.839714  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:40.839827  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:40.867014  346554 cri.go:89] found id: ""
	I1002 07:19:40.867039  346554 logs.go:282] 0 containers: []
	W1002 07:19:40.867048  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:40.867054  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:40.867141  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:40.905810  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:40.905829  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:40.905835  346554 cri.go:89] found id: ""
	I1002 07:19:40.905842  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:40.905898  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.909648  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.913397  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:40.913471  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:40.940488  346554 cri.go:89] found id: ""
	I1002 07:19:40.940511  346554 logs.go:282] 0 containers: []
	W1002 07:19:40.940520  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:40.940526  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:40.940585  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:40.968408  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:40.968429  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:40.968439  346554 cri.go:89] found id: ""
	I1002 07:19:40.968447  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:40.968503  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.972336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.976070  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:40.976163  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:41.010288  346554 cri.go:89] found id: ""
	I1002 07:19:41.010318  346554 logs.go:282] 0 containers: []
	W1002 07:19:41.010328  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:41.010338  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:41.010353  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:41.058706  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:41.058741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:41.085223  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:41.085252  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:41.117537  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:41.117564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:41.218224  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:41.218265  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:41.234686  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:41.234727  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:41.270240  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:41.270276  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:41.321885  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:41.321922  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:41.350649  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:41.350684  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:41.382710  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:41.382740  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:41.465872  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:41.465911  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:41.547196  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:41.537685    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539123    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539741    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.541682    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.542291    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:41.537685    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539123    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539741    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.541682    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.542291    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:41.547220  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:41.547234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.074126  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:44.087746  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:44.087861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:44.116198  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.116223  346554 cri.go:89] found id: ""
	I1002 07:19:44.116232  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:44.116290  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.120227  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:44.120325  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:44.146916  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:44.146943  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:44.146948  346554 cri.go:89] found id: ""
	I1002 07:19:44.146955  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:44.147009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.151266  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.155925  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:44.156012  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:44.190430  346554 cri.go:89] found id: ""
	I1002 07:19:44.190458  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.190467  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:44.190473  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:44.190529  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:44.219366  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:44.219387  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:44.219392  346554 cri.go:89] found id: ""
	I1002 07:19:44.219400  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:44.219455  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.223324  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.226924  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:44.227000  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:44.252543  346554 cri.go:89] found id: ""
	I1002 07:19:44.252566  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.252576  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:44.252583  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:44.252650  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:44.280466  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:44.280489  346554 cri.go:89] found id: ""
	I1002 07:19:44.280498  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:44.280559  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.284050  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:44.284122  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:44.314223  346554 cri.go:89] found id: ""
	I1002 07:19:44.314250  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.314259  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:44.314269  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:44.314304  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.340933  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:44.340965  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:44.377320  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:44.377352  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:44.411349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:44.411377  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:44.516647  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:44.516695  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:44.585736  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:44.578237    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.578651    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580147    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580498    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.581966    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:44.578237    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.578651    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580147    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580498    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.581966    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:44.585771  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:44.585785  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:44.629867  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:44.629909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:44.681709  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:44.681750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:44.710536  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:44.710566  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:44.801698  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:44.801744  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:44.834146  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:44.834175  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:47.351602  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:47.362458  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:47.362546  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:47.391769  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:47.391792  346554 cri.go:89] found id: ""
	I1002 07:19:47.391802  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:47.391863  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.395882  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:47.395971  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:47.428129  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:47.428151  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:47.428156  346554 cri.go:89] found id: ""
	I1002 07:19:47.428164  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:47.428225  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.432313  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.436344  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:47.436415  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:47.464208  346554 cri.go:89] found id: ""
	I1002 07:19:47.464230  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.464238  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:47.464244  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:47.464302  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:47.494674  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:47.494731  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:47.494773  346554 cri.go:89] found id: ""
	I1002 07:19:47.494800  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:47.494885  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.499610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.503658  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:47.503779  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:47.532490  346554 cri.go:89] found id: ""
	I1002 07:19:47.532517  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.532527  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:47.532534  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:47.532599  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:47.565084  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:47.565122  346554 cri.go:89] found id: ""
	I1002 07:19:47.565131  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:47.565231  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.569404  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:47.569483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:47.597243  346554 cri.go:89] found id: ""
	I1002 07:19:47.597266  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.597275  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:47.597284  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:47.597294  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:47.693710  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:47.693748  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:47.771715  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:47.763458    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.764216    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.765967    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.766445    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.768080    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:47.763458    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.764216    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.765967    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.766445    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.768080    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:47.771739  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:47.771752  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:47.810005  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:47.810090  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:47.890792  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:47.890824  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:47.977230  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:47.977271  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:48.018612  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:48.018643  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:48.105364  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:48.105401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:48.124841  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:48.124870  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:48.193027  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:48.193069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:48.239251  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:48.239279  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:50.782662  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:50.794011  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:50.794105  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:50.838191  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:50.838216  346554 cri.go:89] found id: ""
	I1002 07:19:50.838225  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:50.838286  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.842655  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:50.842755  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:50.891807  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:50.891833  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:50.891839  346554 cri.go:89] found id: ""
	I1002 07:19:50.891847  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:50.891964  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.899196  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.904048  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:50.904143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:50.939603  346554 cri.go:89] found id: ""
	I1002 07:19:50.939626  346554 logs.go:282] 0 containers: []
	W1002 07:19:50.939635  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:50.939641  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:50.939735  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:50.971030  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:50.971053  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:50.971059  346554 cri.go:89] found id: ""
	I1002 07:19:50.971067  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:50.971179  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.975612  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.980140  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:50.980242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:51.025029  346554 cri.go:89] found id: ""
	I1002 07:19:51.025055  346554 logs.go:282] 0 containers: []
	W1002 07:19:51.025064  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:51.025071  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:51.025186  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:51.058743  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:51.058764  346554 cri.go:89] found id: ""
	I1002 07:19:51.058772  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:51.058862  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:51.064931  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:51.065035  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:51.101431  346554 cri.go:89] found id: ""
	I1002 07:19:51.101462  346554 logs.go:282] 0 containers: []
	W1002 07:19:51.101486  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:51.101498  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:51.101531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:51.126461  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:51.126494  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:51.217174  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:51.208157    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.208931    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.210624    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.211554    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.212602    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:51.208157    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.208931    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.210624    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.211554    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.212602    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:51.217200  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:51.217216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:51.279369  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:51.279449  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:51.337216  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:51.337253  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:51.425630  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:51.425669  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:51.528560  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:51.528601  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:51.556690  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:51.556719  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:51.600118  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:51.600251  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:51.632616  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:51.632650  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:51.662904  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:51.662935  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:54.196274  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:54.207476  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:54.207546  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:54.238643  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:54.238664  346554 cri.go:89] found id: ""
	I1002 07:19:54.238673  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:54.238729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.242382  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:54.242456  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:54.274345  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:54.274377  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:54.274383  346554 cri.go:89] found id: ""
	I1002 07:19:54.274390  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:54.274451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.278686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.283146  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:54.283225  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:54.315609  346554 cri.go:89] found id: ""
	I1002 07:19:54.315635  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.315645  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:54.315652  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:54.315718  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:54.343684  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:54.343709  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:54.343715  346554 cri.go:89] found id: ""
	I1002 07:19:54.343723  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:54.343789  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.347649  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.351327  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:54.351428  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:54.380301  346554 cri.go:89] found id: ""
	I1002 07:19:54.380336  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.380346  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:54.380353  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:54.380440  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:54.413081  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:54.413105  346554 cri.go:89] found id: ""
	I1002 07:19:54.413114  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:54.413172  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.417107  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:54.417181  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:54.450903  346554 cri.go:89] found id: ""
	I1002 07:19:54.450930  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.450947  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:54.450957  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:54.450972  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:54.551509  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:54.551550  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:54.567991  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:54.568018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:54.641344  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:54.632782    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.633432    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635278    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635893    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.637542    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:54.632782    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.633432    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635278    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635893    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.637542    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:54.641366  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:54.641403  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:54.677557  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:54.677592  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:54.742382  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:54.742417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:54.830648  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:54.830681  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:54.866699  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:54.866727  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:54.893138  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:54.893166  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:54.942885  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:54.942920  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:54.977070  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:54.977098  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:57.528866  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:57.540731  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:57.540803  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:57.571921  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:57.571945  346554 cri.go:89] found id: ""
	I1002 07:19:57.571954  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:57.572028  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.575942  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:57.576018  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:57.604185  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:57.604219  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:57.604224  346554 cri.go:89] found id: ""
	I1002 07:19:57.604232  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:57.604326  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.608202  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.611833  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:57.611912  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:57.640401  346554 cri.go:89] found id: ""
	I1002 07:19:57.640431  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.640440  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:57.640447  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:57.640519  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:57.671538  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:57.671560  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:57.671565  346554 cri.go:89] found id: ""
	I1002 07:19:57.671572  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:57.671629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.675430  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.679760  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:57.679837  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:57.707483  346554 cri.go:89] found id: ""
	I1002 07:19:57.707511  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.707521  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:57.707527  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:57.707592  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:57.736308  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:57.736330  346554 cri.go:89] found id: ""
	I1002 07:19:57.736338  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:57.736407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.740334  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:57.740505  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:57.771488  346554 cri.go:89] found id: ""
	I1002 07:19:57.771558  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.771575  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:57.771585  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:57.771599  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:57.824974  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:57.825013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:57.862787  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:57.862825  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:57.891348  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:57.891374  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:57.923682  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:57.923711  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:57.996115  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:57.987953    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.988650    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990229    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990623    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.992277    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:57.987953    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.988650    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990229    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990623    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.992277    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:57.996139  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:57.996155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:58.033126  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:58.033198  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:58.106377  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:58.106415  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:58.139224  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:58.139252  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:58.226478  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:58.226525  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:58.331297  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:58.331338  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:00.847448  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:00.859829  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:00.859905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:00.887965  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:00.888039  346554 cri.go:89] found id: ""
	I1002 07:20:00.888063  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:00.888133  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.892548  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:00.892623  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:00.922567  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:00.922586  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:00.922591  346554 cri.go:89] found id: ""
	I1002 07:20:00.922598  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:00.922653  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.926435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.930250  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:00.930339  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:00.959728  346554 cri.go:89] found id: ""
	I1002 07:20:00.959759  346554 logs.go:282] 0 containers: []
	W1002 07:20:00.959769  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:00.959777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:00.959861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:00.988254  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:00.988317  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:00.988338  346554 cri.go:89] found id: ""
	I1002 07:20:00.988365  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:00.988466  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.993016  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.996699  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:00.996818  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:01.024791  346554 cri.go:89] found id: ""
	I1002 07:20:01.024815  346554 logs.go:282] 0 containers: []
	W1002 07:20:01.024823  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:01.024849  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:01.024931  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:01.056703  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:01.056728  346554 cri.go:89] found id: ""
	I1002 07:20:01.056737  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:01.056820  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:01.061200  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:01.061302  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:01.092652  346554 cri.go:89] found id: ""
	I1002 07:20:01.092680  346554 logs.go:282] 0 containers: []
	W1002 07:20:01.092690  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:01.092701  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:01.092715  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:01.121048  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:01.121084  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:01.227967  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:01.228007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:01.246697  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:01.246728  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:01.299528  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:01.299606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:01.329789  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:01.329875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:01.412310  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:01.412348  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:01.449621  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:01.449651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:01.528807  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:01.519940    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.520990    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.521913    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523485    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523993    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:01.519940    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.520990    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.521913    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523485    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523993    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:01.528832  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:01.528848  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:01.557543  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:01.557575  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:01.606902  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:01.607007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:04.163648  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:04.175704  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:04.175798  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:04.202895  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:04.202920  346554 cri.go:89] found id: ""
	I1002 07:20:04.202929  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:04.202988  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.206773  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:04.206847  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:04.237461  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:04.237484  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:04.237490  346554 cri.go:89] found id: ""
	I1002 07:20:04.237497  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:04.237551  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.241192  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.244646  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:04.244721  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:04.271145  346554 cri.go:89] found id: ""
	I1002 07:20:04.271172  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.271181  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:04.271188  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:04.271290  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:04.301758  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:04.301787  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:04.301792  346554 cri.go:89] found id: ""
	I1002 07:20:04.301800  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:04.301858  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.305658  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.309360  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:04.309437  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:04.339291  346554 cri.go:89] found id: ""
	I1002 07:20:04.339317  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.339339  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:04.339347  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:04.339417  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:04.366771  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:04.366841  346554 cri.go:89] found id: ""
	I1002 07:20:04.366866  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:04.366961  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.371032  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:04.371213  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:04.396810  346554 cri.go:89] found id: ""
	I1002 07:20:04.396889  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.396905  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:04.396916  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:04.396933  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:04.414258  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:04.414291  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:04.478315  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:04.478395  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:04.536808  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:04.536847  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:04.564995  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:04.565025  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:04.592902  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:04.592931  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:04.671813  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:04.671849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:04.710652  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:04.710684  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:04.820627  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:04.820664  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:04.897187  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:04.884402    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.885229    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.886886    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.887493    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.889166    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:04.884402    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.885229    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.886886    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.887493    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.889166    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:04.897212  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:04.897229  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:04.936329  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:04.936358  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.496901  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:07.514473  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:07.514547  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:07.540993  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:07.541017  346554 cri.go:89] found id: ""
	I1002 07:20:07.541025  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:07.541109  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.545015  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:07.545090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:07.572646  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:07.572670  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:07.572675  346554 cri.go:89] found id: ""
	I1002 07:20:07.572683  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:07.572763  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.576707  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.580612  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:07.580684  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:07.606885  346554 cri.go:89] found id: ""
	I1002 07:20:07.606909  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.606917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:07.606923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:07.606980  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:07.633971  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.634051  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:07.634072  346554 cri.go:89] found id: ""
	I1002 07:20:07.634115  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:07.634212  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.638009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.641489  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:07.641558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:07.669226  346554 cri.go:89] found id: ""
	I1002 07:20:07.669252  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.669262  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:07.669269  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:07.669328  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:07.697084  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:07.697110  346554 cri.go:89] found id: ""
	I1002 07:20:07.697119  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:07.697218  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.702023  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:07.702125  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:07.729244  346554 cri.go:89] found id: ""
	I1002 07:20:07.729270  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.729279  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:07.729289  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:07.729305  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:07.774187  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:07.774226  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.840113  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:07.840153  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:07.873716  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:07.873757  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:07.891261  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:07.891289  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:07.916233  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:07.916263  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:07.952299  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:07.952332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:07.986719  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:07.986746  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:08.071303  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:08.071345  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:08.108002  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:08.108028  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:08.210536  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:08.210576  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:08.294093  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:08.284651    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286253    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286944    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.288549    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.289239    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:08.284651    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286253    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286944    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.288549    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.289239    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:10.795316  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:10.809081  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:10.809162  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:10.842834  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:10.842857  346554 cri.go:89] found id: ""
	I1002 07:20:10.842866  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:10.842923  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.846661  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:10.846743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:10.885119  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:10.885154  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:10.885160  346554 cri.go:89] found id: ""
	I1002 07:20:10.885167  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:10.885227  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.888993  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.892673  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:10.892745  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:10.919884  346554 cri.go:89] found id: ""
	I1002 07:20:10.919910  346554 logs.go:282] 0 containers: []
	W1002 07:20:10.919920  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:10.919926  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:10.919986  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:10.948791  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:10.948813  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:10.948818  346554 cri.go:89] found id: ""
	I1002 07:20:10.948832  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:10.948888  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.952760  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.956362  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:10.956465  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:10.984495  346554 cri.go:89] found id: ""
	I1002 07:20:10.984518  346554 logs.go:282] 0 containers: []
	W1002 07:20:10.984528  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:10.984535  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:10.984636  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:11.017757  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:11.017840  346554 cri.go:89] found id: ""
	I1002 07:20:11.017854  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:11.017923  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:11.022016  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:11.022121  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:11.049783  346554 cri.go:89] found id: ""
	I1002 07:20:11.049807  346554 logs.go:282] 0 containers: []
	W1002 07:20:11.049816  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:11.049826  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:11.049858  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:11.130029  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:11.121829    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.122481    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124100    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124782    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.126290    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:11.121829    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.122481    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124100    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124782    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.126290    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:11.130050  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:11.130065  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:11.158585  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:11.158617  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:11.206663  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:11.206698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:11.251780  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:11.251812  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:11.320488  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:11.320524  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:11.401025  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:11.401061  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:11.509831  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:11.509925  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:11.528908  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:11.528984  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:11.560309  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:11.560340  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:11.587476  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:11.587505  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:14.117921  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:14.129181  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:14.129256  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:14.155142  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:14.155165  346554 cri.go:89] found id: ""
	I1002 07:20:14.155174  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:14.155234  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.158996  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:14.159072  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:14.187368  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:14.187439  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:14.187451  346554 cri.go:89] found id: ""
	I1002 07:20:14.187459  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:14.187516  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.191550  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.195394  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:14.195489  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:14.221702  346554 cri.go:89] found id: ""
	I1002 07:20:14.221731  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.221741  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:14.221748  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:14.221805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:14.250745  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:14.250768  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:14.250774  346554 cri.go:89] found id: ""
	I1002 07:20:14.250781  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:14.250840  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.254464  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.257656  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:14.257732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:14.287657  346554 cri.go:89] found id: ""
	I1002 07:20:14.287684  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.287693  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:14.287699  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:14.287763  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:14.317647  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:14.317670  346554 cri.go:89] found id: ""
	I1002 07:20:14.317680  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:14.317738  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.321550  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:14.321664  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:14.347420  346554 cri.go:89] found id: ""
	I1002 07:20:14.347445  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.347455  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:14.347465  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:14.347476  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:14.428069  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:14.428106  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:14.482408  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:14.482447  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:14.534003  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:14.534036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:14.587616  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:14.587652  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:14.615153  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:14.615189  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:14.649482  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:14.649517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:14.745400  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:14.745440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:14.765273  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:14.765307  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:14.841087  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:14.832238    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.833271    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.834838    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.835677    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.837327    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:14.832238    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.833271    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.834838    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.835677    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.837327    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:14.841109  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:14.841123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:14.867206  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:14.867236  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:17.396729  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:17.407809  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:17.407882  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:17.435626  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:17.435649  346554 cri.go:89] found id: ""
	I1002 07:20:17.435667  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:17.435729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.440093  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:17.440173  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:17.481710  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:17.481732  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:17.481738  346554 cri.go:89] found id: ""
	I1002 07:20:17.481745  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:17.481808  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.488857  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.492676  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:17.492748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:17.535179  346554 cri.go:89] found id: ""
	I1002 07:20:17.535251  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.535277  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:17.535317  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:17.535404  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:17.567305  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:17.567330  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:17.567335  346554 cri.go:89] found id: ""
	I1002 07:20:17.567343  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:17.567405  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.572504  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.576436  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:17.576540  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:17.604459  346554 cri.go:89] found id: ""
	I1002 07:20:17.604489  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.604498  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:17.604504  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:17.604568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:17.632230  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:17.632254  346554 cri.go:89] found id: ""
	I1002 07:20:17.632263  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:17.632352  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.636309  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:17.636416  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:17.664031  346554 cri.go:89] found id: ""
	I1002 07:20:17.664058  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.664068  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:17.664078  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:17.664090  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:17.690836  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:17.690911  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:17.720348  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:17.720376  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:17.752215  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:17.752295  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:17.855749  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:17.855789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:17.872293  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:17.872320  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:17.923506  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:17.923540  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:17.971187  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:17.971220  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:18.041592  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:18.041630  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:18.085650  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:18.085682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:18.171333  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:18.171372  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:18.244409  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:18.236277    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.236822    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238310    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238776    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.240614    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:18.236277    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.236822    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238310    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238776    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.240614    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:20.746282  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:20.757663  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:20.757743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:20.787729  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:20.787751  346554 cri.go:89] found id: ""
	I1002 07:20:20.787760  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:20.787845  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.792330  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:20.792424  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:20.829800  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:20.829824  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:20.829830  346554 cri.go:89] found id: ""
	I1002 07:20:20.829838  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:20.829899  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.833952  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.837642  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:20.837723  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:20.867702  346554 cri.go:89] found id: ""
	I1002 07:20:20.867725  346554 logs.go:282] 0 containers: []
	W1002 07:20:20.867734  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:20.867740  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:20.867830  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:20.908994  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:20.909016  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:20.909022  346554 cri.go:89] found id: ""
	I1002 07:20:20.909029  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:20.909085  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.913045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.916567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:20.916643  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:20.947545  346554 cri.go:89] found id: ""
	I1002 07:20:20.947571  346554 logs.go:282] 0 containers: []
	W1002 07:20:20.947581  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:20.947588  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:20.947651  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:20.980904  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:20.980984  346554 cri.go:89] found id: ""
	I1002 07:20:20.980999  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:20.981082  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.984909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:20.984982  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:21.020855  346554 cri.go:89] found id: ""
	I1002 07:20:21.020878  346554 logs.go:282] 0 containers: []
	W1002 07:20:21.020887  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:21.020896  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:21.020907  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:21.117602  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:21.117638  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:21.192022  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:21.182767    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.183788    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185393    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185998    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.187680    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:21.182767    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.183788    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185393    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185998    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.187680    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:21.192043  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:21.192057  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:21.276022  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:21.276060  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:21.308782  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:21.308822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:21.396093  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:21.396132  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:21.438867  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:21.438900  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:21.463876  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:21.463906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:21.500802  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:21.500843  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:21.550471  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:21.550508  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:21.590310  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:21.590349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:24.119676  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:24.131693  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:24.131783  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:24.163845  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:24.163870  346554 cri.go:89] found id: ""
	I1002 07:20:24.163879  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:24.163939  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.167667  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:24.167742  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:24.195635  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:24.195658  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:24.195664  346554 cri.go:89] found id: ""
	I1002 07:20:24.195672  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:24.195731  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.199786  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.204099  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:24.204199  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:24.233690  346554 cri.go:89] found id: ""
	I1002 07:20:24.233716  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.233726  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:24.233733  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:24.233790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:24.262505  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:24.262565  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:24.262586  346554 cri.go:89] found id: ""
	I1002 07:20:24.262614  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:24.262691  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.266650  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.270417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:24.270511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:24.297687  346554 cri.go:89] found id: ""
	I1002 07:20:24.297713  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.297723  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:24.297729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:24.297790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:24.325175  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:24.325197  346554 cri.go:89] found id: ""
	I1002 07:20:24.325205  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:24.325284  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.329310  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:24.329399  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:24.358432  346554 cri.go:89] found id: ""
	I1002 07:20:24.358458  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.358468  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:24.358477  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:24.358489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:24.418997  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:24.419034  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:24.449127  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:24.449155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:24.545814  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:24.545853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:24.561748  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:24.561777  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:24.632202  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:24.623701    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.624508    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626130    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626462    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.628020    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:24.623701    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.624508    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626130    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626462    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.628020    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:24.632226  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:24.632239  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:24.662637  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:24.662668  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:24.740789  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:24.740830  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:24.773325  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:24.773357  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:24.807399  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:24.807428  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:24.853933  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:24.853972  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:27.396082  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:27.406955  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:27.407027  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:27.435147  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:27.435171  346554 cri.go:89] found id: ""
	I1002 07:20:27.435180  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:27.435238  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.440669  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:27.440745  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:27.467109  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:27.467176  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:27.467196  346554 cri.go:89] found id: ""
	I1002 07:20:27.467205  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:27.467275  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.471217  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.474815  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:27.474888  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:27.503111  346554 cri.go:89] found id: ""
	I1002 07:20:27.503136  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.503145  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:27.503152  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:27.503222  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:27.540213  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:27.540253  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:27.540260  346554 cri.go:89] found id: ""
	I1002 07:20:27.540276  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:27.540359  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.544590  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.548529  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:27.548605  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:27.577677  346554 cri.go:89] found id: ""
	I1002 07:20:27.577746  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.577772  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:27.577798  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:27.577892  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:27.607310  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:27.607329  346554 cri.go:89] found id: ""
	I1002 07:20:27.607337  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:27.607393  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.611619  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:27.611690  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:27.647844  346554 cri.go:89] found id: ""
	I1002 07:20:27.647872  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.647882  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:27.647892  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:27.647905  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:27.723377  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:27.713686    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.714844    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.715834    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717611    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717950    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:27.713686    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.714844    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.715834    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717611    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717950    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:27.723400  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:27.723419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:27.750902  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:27.750932  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:27.804228  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:27.804267  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:27.866989  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:27.867068  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:27.895361  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:27.895393  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:28.004869  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:28.004912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:28.030605  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:28.030637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:28.090494  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:28.090531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:28.120915  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:28.120953  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:28.213702  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:28.213740  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:30.746147  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:30.758010  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:30.758090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:30.789909  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:30.789936  346554 cri.go:89] found id: ""
	I1002 07:20:30.789945  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:30.790004  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.794321  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:30.794407  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:30.823421  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:30.823445  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:30.823451  346554 cri.go:89] found id: ""
	I1002 07:20:30.823459  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:30.823520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.827486  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.831334  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:30.831416  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:30.857968  346554 cri.go:89] found id: ""
	I1002 07:20:30.857996  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.858005  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:30.858012  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:30.858073  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:30.885972  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:30.885997  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:30.886002  346554 cri.go:89] found id: ""
	I1002 07:20:30.886010  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:30.886074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.891710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.897102  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:30.897174  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:30.928917  346554 cri.go:89] found id: ""
	I1002 07:20:30.928944  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.928953  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:30.928960  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:30.929079  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:30.957428  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:30.957456  346554 cri.go:89] found id: ""
	I1002 07:20:30.957465  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:30.957524  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.961555  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:30.961638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:30.991607  346554 cri.go:89] found id: ""
	I1002 07:20:30.991644  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.991654  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:30.991664  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:30.991682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:31.034696  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:31.034732  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:31.095475  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:31.095521  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:31.124509  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:31.124543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:31.164950  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:31.164982  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:31.242438  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:31.232305    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.233259    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.234890    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.236692    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.237374    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:31.232305    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.233259    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.234890    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.236692    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.237374    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:31.242461  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:31.242475  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:31.288791  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:31.288829  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:31.324555  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:31.324590  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:31.358683  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:31.358775  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:31.442957  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:31.443002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:31.546184  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:31.546226  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:34.062520  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:34.074346  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:34.074429  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:34.104094  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:34.104116  346554 cri.go:89] found id: ""
	I1002 07:20:34.104124  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:34.104184  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.108168  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:34.108242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:34.134780  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:34.134803  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:34.134808  346554 cri.go:89] found id: ""
	I1002 07:20:34.134816  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:34.134873  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.140158  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.144631  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:34.144709  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:34.171174  346554 cri.go:89] found id: ""
	I1002 07:20:34.171197  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.171209  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:34.171216  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:34.171279  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:34.201197  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:34.201265  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:34.201279  346554 cri.go:89] found id: ""
	I1002 07:20:34.201289  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:34.201358  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.205487  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.209274  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:34.209371  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:34.236797  346554 cri.go:89] found id: ""
	I1002 07:20:34.236823  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.236832  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:34.236839  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:34.236899  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:34.268130  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:34.268153  346554 cri.go:89] found id: ""
	I1002 07:20:34.268163  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:34.268221  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.272288  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:34.272494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:34.303012  346554 cri.go:89] found id: ""
	I1002 07:20:34.303036  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.303046  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:34.303057  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:34.303069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:34.330987  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:34.331016  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:34.409294  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:34.409332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:34.444890  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:34.444921  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:34.529848  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:34.521813    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.522492    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.523830    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.524582    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.526232    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:34.521813    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.522492    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.523830    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.524582    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.526232    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:34.529873  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:34.529887  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:34.576746  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:34.576783  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:34.617959  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:34.617994  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:34.680077  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:34.680116  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:34.709769  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:34.709801  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:34.741411  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:34.741440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:34.841059  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:34.841096  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:37.359292  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:37.370946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:37.371032  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:37.399137  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:37.399162  346554 cri.go:89] found id: ""
	I1002 07:20:37.399171  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:37.399230  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.403338  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:37.403412  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:37.430753  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:37.430777  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:37.430782  346554 cri.go:89] found id: ""
	I1002 07:20:37.430790  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:37.430846  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.434756  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.440208  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:37.440282  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:37.466624  346554 cri.go:89] found id: ""
	I1002 07:20:37.466708  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.466741  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:37.466763  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:37.466859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:37.494022  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:37.494043  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:37.494049  346554 cri.go:89] found id: ""
	I1002 07:20:37.494057  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:37.494137  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.498098  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.502412  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:37.502500  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:37.535920  346554 cri.go:89] found id: ""
	I1002 07:20:37.535947  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.535956  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:37.535963  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:37.536022  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:37.562970  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:37.562994  346554 cri.go:89] found id: ""
	I1002 07:20:37.563004  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:37.563062  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.567000  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:37.567077  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:37.595796  346554 cri.go:89] found id: ""
	I1002 07:20:37.595823  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.595832  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:37.595842  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:37.595875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:37.622318  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:37.622347  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:37.698567  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:37.698606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:37.730294  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:37.730323  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:37.746780  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:37.746819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:37.774051  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:37.774082  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:37.842657  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:37.842692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:37.879058  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:37.879101  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:37.958213  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:37.958255  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:38.066523  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:38.066564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:38.140589  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:38.132053    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.132715    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.134486    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.135135    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.136775    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:38.132053    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.132715    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.134486    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.135135    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.136775    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:38.140614  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:38.140628  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:40.668101  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:40.680533  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:40.680613  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:40.709182  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:40.709201  346554 cri.go:89] found id: ""
	I1002 07:20:40.709217  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:40.709275  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.714063  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:40.714131  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:40.741940  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:40.741960  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:40.741965  346554 cri.go:89] found id: ""
	I1002 07:20:40.741972  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:40.742030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.746103  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.749819  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:40.749890  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:40.779806  346554 cri.go:89] found id: ""
	I1002 07:20:40.779869  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.779893  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:40.779918  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:40.779999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:40.818846  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:40.818910  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:40.818930  346554 cri.go:89] found id: ""
	I1002 07:20:40.818956  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:40.819034  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.825049  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.829111  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:40.829255  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:40.857000  346554 cri.go:89] found id: ""
	I1002 07:20:40.857070  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.857101  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:40.857116  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:40.857204  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:40.890997  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:40.891021  346554 cri.go:89] found id: ""
	I1002 07:20:40.891030  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:40.891120  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.902062  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:40.902188  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:40.931155  346554 cri.go:89] found id: ""
	I1002 07:20:40.931192  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.931201  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:40.931258  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:40.931282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:40.968238  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:40.968267  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:41.004537  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:41.004577  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:41.077656  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:41.077693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:41.110709  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:41.110738  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:41.146808  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:41.146839  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:41.218315  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:41.209116    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.209601    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.211401    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213018    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213363    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:41.209116    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.209601    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.211401    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213018    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213363    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:41.218395  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:41.218476  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:41.270106  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:41.270141  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:41.300977  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:41.301007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:41.385349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:41.385387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:41.485614  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:41.485658  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:44.002362  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:44.017480  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:44.017558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:44.055626  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:44.055653  346554 cri.go:89] found id: ""
	I1002 07:20:44.055662  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:44.055736  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.059917  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:44.059997  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:44.097033  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:44.097067  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:44.097072  346554 cri.go:89] found id: ""
	I1002 07:20:44.097079  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:44.097147  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.101257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.105790  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:44.105890  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:44.134184  346554 cri.go:89] found id: ""
	I1002 07:20:44.134213  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.134222  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:44.134229  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:44.134316  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:44.172910  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:44.172972  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:44.172992  346554 cri.go:89] found id: ""
	I1002 07:20:44.173019  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:44.173087  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.177020  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.181101  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:44.181189  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:44.210050  346554 cri.go:89] found id: ""
	I1002 07:20:44.210072  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.210081  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:44.210088  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:44.210148  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:44.236942  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:44.236966  346554 cri.go:89] found id: ""
	I1002 07:20:44.236975  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:44.237032  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.240886  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:44.240968  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:44.267437  346554 cri.go:89] found id: ""
	I1002 07:20:44.267471  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.267482  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:44.267498  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:44.267522  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:44.311617  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:44.311650  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:44.371464  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:44.371502  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:44.401657  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:44.401685  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:44.429428  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:44.429458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:44.457332  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:44.457370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:44.542400  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:44.542441  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:44.576729  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:44.576808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:44.671950  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:44.671991  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:44.688074  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:44.688102  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:44.772308  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:44.762400    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.763526    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.764141    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766001    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766685    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:44.762400    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.763526    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.764141    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766001    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766685    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:44.772331  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:44.772344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.326275  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:47.337461  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:47.337588  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:47.370813  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:47.370885  346554 cri.go:89] found id: ""
	I1002 07:20:47.370909  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:47.370985  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.375983  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:47.376102  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:47.408952  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.409021  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:47.409046  346554 cri.go:89] found id: ""
	I1002 07:20:47.409075  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:47.409142  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.412894  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.416604  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:47.416678  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:47.443724  346554 cri.go:89] found id: ""
	I1002 07:20:47.443746  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.443755  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:47.443761  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:47.443825  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:47.472814  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:47.472835  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:47.472840  346554 cri.go:89] found id: ""
	I1002 07:20:47.472848  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:47.472910  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.476853  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.481052  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:47.481125  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:47.527292  346554 cri.go:89] found id: ""
	I1002 07:20:47.527316  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.527325  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:47.527331  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:47.527396  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:47.557465  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:47.557493  346554 cri.go:89] found id: ""
	I1002 07:20:47.557502  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:47.557573  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.561605  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:47.561776  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:47.592217  346554 cri.go:89] found id: ""
	I1002 07:20:47.592251  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.592261  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:47.592270  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:47.592282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:47.609667  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:47.609697  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:47.670961  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:47.670999  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:47.701512  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:47.701543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:47.730463  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:47.730493  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:47.813379  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:47.804825    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.805487    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.806775    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.807262    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.808792    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:47.804825    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.805487    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.806775    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.807262    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.808792    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:47.813403  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:47.813417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:47.839632  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:47.839663  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.890767  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:47.890807  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:47.931484  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:47.931519  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:48.013592  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:48.013683  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:48.048341  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:48.048371  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:50.660679  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:50.672098  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:50.672208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:50.698977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:50.699002  346554 cri.go:89] found id: ""
	I1002 07:20:50.699012  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:50.699155  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.703120  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:50.703197  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:50.731004  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:50.731030  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:50.731035  346554 cri.go:89] found id: ""
	I1002 07:20:50.731043  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:50.731134  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.735170  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.739036  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:50.739228  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:50.765233  346554 cri.go:89] found id: ""
	I1002 07:20:50.765257  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.765267  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:50.765276  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:50.765337  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:50.798825  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:50.798846  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:50.798851  346554 cri.go:89] found id: ""
	I1002 07:20:50.798858  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:50.798922  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.803023  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.806604  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:50.806684  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:50.834561  346554 cri.go:89] found id: ""
	I1002 07:20:50.834595  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.834605  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:50.834612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:50.834685  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:50.862616  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:50.862640  346554 cri.go:89] found id: ""
	I1002 07:20:50.862649  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:50.862719  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.866512  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:50.866591  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:50.894801  346554 cri.go:89] found id: ""
	I1002 07:20:50.894874  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.894898  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:50.894927  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:50.894970  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:50.922014  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:50.922093  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:50.963158  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:50.963238  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:51.041253  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:51.041298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:51.078068  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:51.078373  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:51.109345  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:51.109379  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:51.143553  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:51.143586  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:51.160251  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:51.160287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:51.232331  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:51.222843    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.223585    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226402    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226914    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.228078    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:51.222843    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.223585    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226402    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226914    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.228078    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:51.232357  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:51.232370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:51.284859  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:51.284891  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:51.366726  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:51.366764  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:53.965349  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:53.977241  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:53.977365  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:54.007342  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:54.007370  346554 cri.go:89] found id: ""
	I1002 07:20:54.007379  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:54.007452  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.014154  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:54.014243  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:54.042738  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:54.042761  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:54.042767  346554 cri.go:89] found id: ""
	I1002 07:20:54.042787  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:54.042849  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.047324  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.052426  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:54.052514  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:54.092137  346554 cri.go:89] found id: ""
	I1002 07:20:54.092162  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.092171  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:54.092177  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:54.092245  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:54.123873  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:54.123895  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:54.123900  346554 cri.go:89] found id: ""
	I1002 07:20:54.123908  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:54.123966  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.128307  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.132643  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:54.132764  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:54.167072  346554 cri.go:89] found id: ""
	I1002 07:20:54.167173  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.167197  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:54.167223  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:54.167317  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:54.201096  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:54.201124  346554 cri.go:89] found id: ""
	I1002 07:20:54.201133  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:54.201192  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.205200  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:54.205319  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:54.232346  346554 cri.go:89] found id: ""
	I1002 07:20:54.232375  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.232384  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:54.232394  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:54.232424  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:54.307053  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:54.297800    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.298604    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.300420    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.301180    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.302885    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:54.297800    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.298604    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.300420    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.301180    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.302885    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:54.307076  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:54.307120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:54.339765  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:54.339797  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:54.389419  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:54.389463  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:54.427898  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:54.427934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:54.459945  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:54.459979  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:54.495013  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:54.495049  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:54.593488  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:54.593523  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:54.699166  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:54.699248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:54.715185  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:54.715217  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:54.790047  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:54.790081  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:57.332703  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:57.343440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:57.343508  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:57.371159  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:57.371224  346554 cri.go:89] found id: ""
	I1002 07:20:57.371248  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:57.371325  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.376379  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:57.376455  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:57.403394  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:57.403417  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:57.403423  346554 cri.go:89] found id: ""
	I1002 07:20:57.403431  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:57.403486  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.407238  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.410942  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:57.411033  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:57.438995  346554 cri.go:89] found id: ""
	I1002 07:20:57.439020  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.439029  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:57.439036  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:57.439133  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:57.471614  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:57.471639  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:57.471644  346554 cri.go:89] found id: ""
	I1002 07:20:57.471656  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:57.471714  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.475670  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.479817  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:57.479927  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:57.514129  346554 cri.go:89] found id: ""
	I1002 07:20:57.514152  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.514160  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:57.514166  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:57.514229  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:57.540930  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:57.540954  346554 cri.go:89] found id: ""
	I1002 07:20:57.540963  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:57.541019  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.545166  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:57.545246  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:57.580607  346554 cri.go:89] found id: ""
	I1002 07:20:57.580633  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.580643  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:57.580653  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:57.580682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:57.662349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:57.662389  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:57.761863  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:57.761900  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:57.830325  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:57.830366  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:57.856569  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:57.856598  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:57.888135  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:57.888164  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:57.906242  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:57.906270  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:57.976993  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:57.967788    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.968516    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.970387    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.971058    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.973057    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:57.967788    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.968516    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.970387    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.971058    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.973057    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:57.977018  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:57.977033  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:58.011287  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:58.011323  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:58.063746  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:58.063782  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:58.114504  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:58.114539  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:00.655161  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:00.666760  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:00.666847  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:00.699194  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:00.699218  346554 cri.go:89] found id: ""
	I1002 07:21:00.699227  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:00.699283  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.703475  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:00.703551  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:00.730837  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:00.730862  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:00.730867  346554 cri.go:89] found id: ""
	I1002 07:21:00.730874  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:00.730933  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.734900  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.738704  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:00.738777  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:00.765809  346554 cri.go:89] found id: ""
	I1002 07:21:00.765832  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.765841  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:00.765847  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:00.765903  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:00.806888  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:00.806911  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:00.806916  346554 cri.go:89] found id: ""
	I1002 07:21:00.806924  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:00.806982  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.810980  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.815454  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:00.815527  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:00.843377  346554 cri.go:89] found id: ""
	I1002 07:21:00.843403  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.843413  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:00.843419  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:00.843480  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:00.870064  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:00.870084  346554 cri.go:89] found id: ""
	I1002 07:21:00.870094  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:00.870150  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.874067  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:00.874142  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:00.912375  346554 cri.go:89] found id: ""
	I1002 07:21:00.912400  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.912409  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:00.912419  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:00.912437  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:01.010660  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:01.010703  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:01.027564  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:01.027589  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:01.108980  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:01.099987    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101432    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101988    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103531    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103983    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:01.099987    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101432    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101988    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103531    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103983    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:01.109003  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:01.109017  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:01.140899  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:01.140925  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:01.201677  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:01.201719  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:01.249485  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:01.249516  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:01.310648  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:01.310682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:01.339591  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:01.339668  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:01.368293  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:01.368363  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:01.451526  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:01.451565  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:03.985004  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:03.995665  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:03.995732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:04.038756  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:04.038786  346554 cri.go:89] found id: ""
	I1002 07:21:04.038796  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:04.038863  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.042734  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:04.042813  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:04.080960  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:04.080984  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:04.080990  346554 cri.go:89] found id: ""
	I1002 07:21:04.080998  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:04.081055  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.085045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.088904  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:04.088984  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:04.116470  346554 cri.go:89] found id: ""
	I1002 07:21:04.116495  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.116504  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:04.116511  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:04.116568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:04.143301  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:04.143324  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:04.143330  346554 cri.go:89] found id: ""
	I1002 07:21:04.143336  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:04.143392  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.149220  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.156754  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:04.156875  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:04.186088  346554 cri.go:89] found id: ""
	I1002 07:21:04.186115  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.186125  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:04.186131  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:04.186222  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:04.213953  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:04.213978  346554 cri.go:89] found id: ""
	I1002 07:21:04.213987  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:04.214074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.220236  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:04.220339  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:04.249797  346554 cri.go:89] found id: ""
	I1002 07:21:04.249825  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.249834  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:04.249876  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:04.249893  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:04.334427  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:04.334464  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:04.365264  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:04.365294  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:04.467641  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:04.467693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:04.495501  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:04.495532  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:04.553841  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:04.553879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:04.590884  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:04.590912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:04.618124  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:04.618157  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:04.634781  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:04.634812  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:04.712412  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:04.704035    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.704877    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706460    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706999    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.708596    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:04.704035    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.704877    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706460    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706999    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.708596    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:04.712440  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:04.712458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:04.772367  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:04.772405  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:07.313327  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:07.324335  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:07.324410  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:07.352343  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:07.352367  346554 cri.go:89] found id: ""
	I1002 07:21:07.352376  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:07.352456  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.356634  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:07.356705  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:07.384754  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:07.384778  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:07.384783  346554 cri.go:89] found id: ""
	I1002 07:21:07.384791  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:07.384871  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.388840  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.392572  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:07.392672  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:07.418573  346554 cri.go:89] found id: ""
	I1002 07:21:07.418605  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.418615  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:07.418622  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:07.418681  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:07.450415  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:07.450439  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:07.450445  346554 cri.go:89] found id: ""
	I1002 07:21:07.450466  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:07.450529  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.454971  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.459463  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:07.459539  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:07.488692  346554 cri.go:89] found id: ""
	I1002 07:21:07.488722  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.488730  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:07.488737  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:07.488799  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:07.520325  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:07.520350  346554 cri.go:89] found id: ""
	I1002 07:21:07.520359  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:07.520421  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.524256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:07.524330  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:07.549519  346554 cri.go:89] found id: ""
	I1002 07:21:07.549540  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.549548  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:07.549558  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:07.549569  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:07.643274  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:07.643315  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:07.716156  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:07.708091    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.708893    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710592    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710902    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.712357    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:07.708091    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.708893    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710592    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710902    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.712357    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:07.716179  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:07.716195  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:07.743950  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:07.743980  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:07.830226  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:07.830266  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:07.847230  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:07.847260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:07.875839  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:07.875908  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:07.937408  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:07.937448  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:07.974391  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:07.974428  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:08.044504  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:08.044544  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:08.085844  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:08.085875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
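
Each gathering pass above follows the same two-step pattern per control-plane component: resolve the container ID with a name-filtered crictl listing, then tail that container's log. A minimal bash sketch of the pattern, assembled only from commands that appear verbatim in the log (it assumes crictl is installed on the node and the commands run with root privileges; the component name is a placeholder):

    component="kube-apiserver"   # same pattern is used for etcd, kube-scheduler, kube-controller-manager
    # List every container (running or exited) whose name matches the component.
    ids=$(sudo crictl ps -a --quiet --name="${component}")
    if [ -z "${ids}" ]; then
      echo "No container was found matching \"${component}\"" >&2
    fi
    # Tail the last 400 lines of each match, as the log gatherer does.
    for id in ${ids}; do
      sudo crictl logs --tail 400 "${id}"
    done
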
	I1002 07:21:10.619391  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:10.631035  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:10.631208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:10.664959  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:10.664983  346554 cri.go:89] found id: ""
	I1002 07:21:10.664992  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:10.665070  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.668812  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:10.668884  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:10.695400  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:10.695424  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:10.695430  346554 cri.go:89] found id: ""
	I1002 07:21:10.695438  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:10.695526  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.699317  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.703430  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:10.703524  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:10.728859  346554 cri.go:89] found id: ""
	I1002 07:21:10.728883  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.728892  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:10.728898  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:10.728974  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:10.754882  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:10.754905  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:10.754911  346554 cri.go:89] found id: ""
	I1002 07:21:10.754918  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:10.754984  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.758686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.762139  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:10.762248  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:10.787999  346554 cri.go:89] found id: ""
	I1002 07:21:10.788067  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.788092  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:10.788115  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:10.788204  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:10.814729  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:10.814803  346554 cri.go:89] found id: ""
	I1002 07:21:10.814825  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:10.814914  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.818388  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:10.818483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:10.845398  346554 cri.go:89] found id: ""
	I1002 07:21:10.845424  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.845433  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:10.845443  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:10.845482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:10.873199  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:10.873225  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:10.951572  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:10.951609  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:11.051035  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:11.051118  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:11.130878  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:11.121998    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.122765    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.124521    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.125102    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.126722    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:11.121998    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.122765    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.124521    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.125102    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.126722    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:11.130909  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:11.130924  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:11.156885  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:11.156920  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:11.211573  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:11.211615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:11.272703  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:11.272742  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:11.301304  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:11.301336  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:11.342833  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:11.342861  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:11.360176  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:11.360204  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
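
Every "describe nodes" attempt in these passes fails identically: the node-local kubectl cannot reach the API server on localhost:8443. A quick manual check of that condition, using only the binary and kubeconfig paths quoted in the log (run inside the minikube node; while the apiserver is down, both commands should fail with "connection refused", matching the errors above):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig describe nodes
    # Probe the apiserver port directly (reachability only, TLS verification skipped).
    curl -sk --max-time 5 https://localhost:8443/healthz || echo "apiserver not reachable on :8443"
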
	I1002 07:21:13.902061  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:13.915871  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:13.915935  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:13.954412  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:13.954439  346554 cri.go:89] found id: ""
	I1002 07:21:13.954448  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:13.954513  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:13.959571  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:13.959655  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:13.994709  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:13.994729  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:13.994735  346554 cri.go:89] found id: ""
	I1002 07:21:13.994743  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:13.994797  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:13.999427  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.003663  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:14.003749  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:14.042653  346554 cri.go:89] found id: ""
	I1002 07:21:14.042680  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.042690  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:14.042696  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:14.042757  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:14.087595  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:14.087615  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:14.087620  346554 cri.go:89] found id: ""
	I1002 07:21:14.087628  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:14.087688  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.092427  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.096855  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:14.096920  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:14.126816  346554 cri.go:89] found id: ""
	I1002 07:21:14.126843  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.126852  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:14.126858  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:14.126918  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:14.155318  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:14.155339  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:14.155344  346554 cri.go:89] found id: ""
	I1002 07:21:14.155351  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:14.155407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.159934  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.164569  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:14.164634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:14.209412  346554 cri.go:89] found id: ""
	I1002 07:21:14.209437  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.209449  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:14.209459  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:14.209471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:14.225995  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:14.226022  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:14.263998  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:14.264027  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:14.360121  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:14.360159  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:14.407199  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:14.407234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:14.434782  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:14.434814  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:14.521080  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:14.521121  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:14.593104  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:14.593134  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:14.699269  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:14.699308  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:14.786512  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:14.774915    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.778879    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.779597    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781358    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781959    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:14.774915    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.778879    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.779597    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781358    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781959    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:14.786535  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:14.786548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:14.869065  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:14.869109  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:14.900362  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:14.900454  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:17.430222  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:17.442136  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:17.442212  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:17.468618  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:17.468642  346554 cri.go:89] found id: ""
	I1002 07:21:17.468664  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:17.468722  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.472407  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:17.472483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:17.500441  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:17.500462  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:17.500468  346554 cri.go:89] found id: ""
	I1002 07:21:17.500475  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:17.500534  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.504574  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.511111  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:17.511190  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:17.539180  346554 cri.go:89] found id: ""
	I1002 07:21:17.539208  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.539217  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:17.539224  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:17.539283  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:17.567616  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:17.567641  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:17.567647  346554 cri.go:89] found id: ""
	I1002 07:21:17.567654  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:17.567710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.571727  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.575519  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:17.575603  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:17.601045  346554 cri.go:89] found id: ""
	I1002 07:21:17.601070  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.601079  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:17.601086  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:17.601143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:17.628358  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:17.628379  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:17.628384  346554 cri.go:89] found id: ""
	I1002 07:21:17.628391  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:17.628479  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.632534  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.636208  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:17.636286  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:17.662364  346554 cri.go:89] found id: ""
	I1002 07:21:17.662389  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.662398  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:17.662408  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:17.662419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:17.756609  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:17.756643  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:17.772784  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:17.772821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:17.854603  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:17.846770    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.847523    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849095    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849421    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.850951    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:17.846770    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.847523    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849095    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849421    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.850951    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:17.854625  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:17.854639  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:17.890480  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:17.890513  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:17.955720  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:17.955755  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:17.986877  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:17.986906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:18.065618  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:18.065659  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:18.111257  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:18.111287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:18.141121  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:18.141151  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:18.202491  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:18.202530  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:18.232094  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:18.232124  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
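
Besides the per-container logs, every pass also collects host-level sources: the kubelet and CRI-O systemd units via journalctl, recent kernel warnings via dmesg, and an overall container listing. A sketch of that host-side portion, mirrored from the commands quoted above (assumes a systemd host with journalctl, dmesg, and crictl available; these are the same invocations the gatherer runs, not new ones):

    sudo journalctl -u kubelet -n 400    # kubelet unit log, last 400 lines
    sudo journalctl -u crio -n 400       # CRI-O unit log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a          # container status
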
	I1002 07:21:20.762758  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:20.773630  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:20.773708  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:20.806503  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:20.806533  346554 cri.go:89] found id: ""
	I1002 07:21:20.806542  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:20.806599  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.810265  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:20.810338  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:20.839055  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:20.839105  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:20.839111  346554 cri.go:89] found id: ""
	I1002 07:21:20.839119  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:20.839176  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.843029  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.846663  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:20.846743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:20.875148  346554 cri.go:89] found id: ""
	I1002 07:21:20.875173  346554 logs.go:282] 0 containers: []
	W1002 07:21:20.875183  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:20.875190  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:20.875249  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:20.907677  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:20.907701  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:20.907707  346554 cri.go:89] found id: ""
	I1002 07:21:20.907715  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:20.907772  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.911686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.915632  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:20.915707  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:20.941873  346554 cri.go:89] found id: ""
	I1002 07:21:20.941899  346554 logs.go:282] 0 containers: []
	W1002 07:21:20.941908  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:20.941915  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:20.941975  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:20.973490  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:20.973515  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:20.973521  346554 cri.go:89] found id: ""
	I1002 07:21:20.973530  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:20.973585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.977414  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.981138  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:20.981213  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:21.013505  346554 cri.go:89] found id: ""
	I1002 07:21:21.013533  346554 logs.go:282] 0 containers: []
	W1002 07:21:21.013543  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:21.013553  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:21.013565  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:21.047930  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:21.047959  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:21.144461  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:21.144498  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:21.218444  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:21.209931    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.210755    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212333    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212924    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.214549    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:21.209931    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.210755    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212333    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212924    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.214549    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:21.218469  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:21.218482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:21.244979  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:21.245010  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:21.273907  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:21.273940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:21.304310  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:21.304341  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:21.383311  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:21.383390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:21.418944  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:21.418976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:21.437126  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:21.437154  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:21.499338  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:21.499373  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:21.541388  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:21.541424  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:24.103318  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:24.114524  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:24.114645  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:24.142263  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:24.142286  346554 cri.go:89] found id: ""
	I1002 07:21:24.142295  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:24.142357  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.146924  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:24.146998  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:24.174920  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:24.174945  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:24.174950  346554 cri.go:89] found id: ""
	I1002 07:21:24.174958  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:24.175015  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.179961  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.183781  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:24.183859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:24.213946  346554 cri.go:89] found id: ""
	I1002 07:21:24.213969  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.213978  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:24.213985  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:24.214044  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:24.240875  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:24.240898  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:24.240903  346554 cri.go:89] found id: ""
	I1002 07:21:24.240910  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:24.240967  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.244817  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.248504  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:24.248601  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:24.277554  346554 cri.go:89] found id: ""
	I1002 07:21:24.277579  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.277588  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:24.277595  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:24.277675  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:24.308411  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:24.308507  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:24.308518  346554 cri.go:89] found id: ""
	I1002 07:21:24.308526  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:24.308585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.312514  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.316209  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:24.316322  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:24.352013  346554 cri.go:89] found id: ""
	I1002 07:21:24.352037  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.352047  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:24.352057  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:24.352070  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:24.392888  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:24.392926  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:24.422136  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:24.422162  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:24.522148  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:24.522189  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:24.559761  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:24.559789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:24.635577  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:24.626450    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.627161    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.628806    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.629342    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.630887    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:24.626450    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.627161    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.628806    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.629342    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.630887    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:24.635658  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:24.635688  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:24.664008  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:24.664038  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:24.716205  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:24.716243  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:24.776422  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:24.776465  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:24.812576  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:24.812606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:24.850011  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:24.850051  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:24.957619  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:24.957658  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
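
The whole sequence keeps repeating every few seconds because each pass opens with the same pgrep probe for a kube-apiserver process that never returns. An illustrative wait loop around that probe (the 3-second interval and 120-second budget are assumptions for illustration, not values taken from the log):

    # Illustrative only: the polling pattern seen at the start of each pass above.
    deadline=$((SECONDS + 120))          # assumed overall budget
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo "kube-apiserver did not come back" >&2; break; }
      sleep 3                            # assumed retry interval
    done
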
	I1002 07:21:27.474346  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:27.486924  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:27.486999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:27.527387  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:27.527411  346554 cri.go:89] found id: ""
	I1002 07:21:27.527419  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:27.527481  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.531347  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:27.531425  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:27.557184  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:27.557209  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:27.557216  346554 cri.go:89] found id: ""
	I1002 07:21:27.557226  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:27.557285  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.561185  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.564887  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:27.564964  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:27.593958  346554 cri.go:89] found id: ""
	I1002 07:21:27.593984  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.593993  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:27.594000  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:27.594070  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:27.624297  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:27.624321  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:27.624325  346554 cri.go:89] found id: ""
	I1002 07:21:27.624332  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:27.624390  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.628548  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.632313  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:27.632401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:27.658827  346554 cri.go:89] found id: ""
	I1002 07:21:27.658850  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.658858  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:27.658876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:27.658942  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:27.687346  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:27.687422  346554 cri.go:89] found id: ""
	I1002 07:21:27.687438  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:27.687516  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.691438  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:27.691563  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:27.716933  346554 cri.go:89] found id: ""
	I1002 07:21:27.716959  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.716969  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:27.716979  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:27.717019  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:27.817783  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:27.817831  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:27.857490  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:27.857525  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:27.885125  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:27.885157  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:27.918095  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:27.918133  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:27.933988  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:27.934018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:28.004686  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:27.994706    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.995565    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997325    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997806    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.999393    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:27.994706    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.995565    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997325    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997806    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.999393    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:28.004719  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:28.004734  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:28.034260  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:28.034287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:28.093230  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:28.093269  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:28.164138  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:28.164177  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:28.195157  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:28.195188  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:30.778568  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:30.789765  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:30.789833  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:30.825174  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:30.825194  346554 cri.go:89] found id: ""
	I1002 07:21:30.825202  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:30.825257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.829729  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:30.829796  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:30.856611  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:30.856632  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:30.856637  346554 cri.go:89] found id: ""
	I1002 07:21:30.856644  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:30.856701  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.860561  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.864279  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:30.864353  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:30.891192  346554 cri.go:89] found id: ""
	I1002 07:21:30.891217  346554 logs.go:282] 0 containers: []
	W1002 07:21:30.891257  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:30.891269  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:30.891353  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:30.918873  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:30.918892  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:30.918897  346554 cri.go:89] found id: ""
	I1002 07:21:30.918904  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:30.918965  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.922949  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.926830  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:30.926928  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:30.953030  346554 cri.go:89] found id: ""
	I1002 07:21:30.953059  346554 logs.go:282] 0 containers: []
	W1002 07:21:30.953068  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:30.953074  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:30.953131  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:30.980458  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:30.980480  346554 cri.go:89] found id: ""
	I1002 07:21:30.980489  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:30.980547  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.984323  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:30.984450  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:31.026334  346554 cri.go:89] found id: ""
	I1002 07:21:31.026360  346554 logs.go:282] 0 containers: []
	W1002 07:21:31.026370  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:31.026380  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:31.026416  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:31.058391  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:31.058420  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:31.116004  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:31.116040  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:31.151060  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:31.151099  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:31.231368  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:31.231406  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:31.332798  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:31.332835  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:31.413678  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:31.405625    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.406285    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.407900    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.408576    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.410010    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:31.405625    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.406285    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.407900    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.408576    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.410010    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:31.413705  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:31.413717  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:31.461265  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:31.461299  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:31.534946  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:31.534986  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:31.562600  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:31.562629  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:31.592876  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:31.592906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:34.110078  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:34.121201  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:34.121271  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:34.148533  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:34.148554  346554 cri.go:89] found id: ""
	I1002 07:21:34.148562  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:34.148621  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.152503  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:34.152585  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:34.181027  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:34.181050  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:34.181056  346554 cri.go:89] found id: ""
	I1002 07:21:34.181063  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:34.181117  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.185002  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.189485  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:34.189560  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:34.215599  346554 cri.go:89] found id: ""
	I1002 07:21:34.215625  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.215634  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:34.215641  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:34.215699  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:34.241734  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:34.241763  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:34.241768  346554 cri.go:89] found id: ""
	I1002 07:21:34.241776  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:34.241832  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.245545  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.248974  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:34.249050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:34.276023  346554 cri.go:89] found id: ""
	I1002 07:21:34.276049  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.276059  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:34.276072  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:34.276132  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:34.303384  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:34.303407  346554 cri.go:89] found id: ""
	I1002 07:21:34.303415  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:34.303472  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.307469  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:34.307539  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:34.340234  346554 cri.go:89] found id: ""
	I1002 07:21:34.340261  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.340271  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:34.340281  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:34.340293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:34.356522  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:34.356550  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:34.394796  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:34.394825  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:34.443502  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:34.443538  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:34.474055  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:34.474081  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:34.555556  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:34.555637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:34.658066  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:34.658101  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:34.733631  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:34.724940    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.725631    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.727437    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.728124    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.729973    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:34.724940    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.725631    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.727437    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.728124    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.729973    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:34.733651  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:34.733665  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:34.784032  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:34.784068  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:34.847736  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:34.847771  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:34.875075  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:34.875172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:37.408950  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:37.421164  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:37.421273  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:37.452410  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:37.452439  346554 cri.go:89] found id: ""
	I1002 07:21:37.452449  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:37.452505  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.456325  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:37.456445  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:37.486317  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:37.486340  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:37.486346  346554 cri.go:89] found id: ""
	I1002 07:21:37.486353  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:37.486451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.490342  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.494027  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:37.494104  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:37.527183  346554 cri.go:89] found id: ""
	I1002 07:21:37.527257  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.527281  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:37.527305  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:37.527403  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:37.553164  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:37.553189  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:37.553194  346554 cri.go:89] found id: ""
	I1002 07:21:37.553202  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:37.553263  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.557191  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.560812  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:37.560909  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:37.592768  346554 cri.go:89] found id: ""
	I1002 07:21:37.592837  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.592861  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:37.592887  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:37.592973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:37.619244  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:37.619275  346554 cri.go:89] found id: ""
	I1002 07:21:37.619285  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:37.619382  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.622994  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:37.623067  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:37.654796  346554 cri.go:89] found id: ""
	I1002 07:21:37.654833  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.654843  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:37.654853  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:37.654864  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:37.735865  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:37.735903  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:37.829667  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:37.829705  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:37.906371  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:37.897524    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.898687    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.899551    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901063    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901395    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:37.897524    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.898687    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.899551    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901063    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901395    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:37.906396  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:37.906409  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:37.931859  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:37.931891  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:37.982107  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:37.982141  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:38.026363  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:38.026402  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:38.097347  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:38.097387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:38.129911  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:38.129940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:38.174203  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:38.174233  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:38.192324  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:38.192356  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:40.723244  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:40.733967  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:40.734044  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:40.761160  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:40.761180  346554 cri.go:89] found id: ""
	I1002 07:21:40.761196  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:40.761257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.764997  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:40.765082  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:40.793331  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:40.793357  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:40.793376  346554 cri.go:89] found id: ""
	I1002 07:21:40.793385  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:40.793441  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.799890  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.803764  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:40.803836  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:40.834660  346554 cri.go:89] found id: ""
	I1002 07:21:40.834686  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.834696  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:40.834702  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:40.834765  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:40.866063  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:40.866087  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:40.866093  346554 cri.go:89] found id: ""
	I1002 07:21:40.866103  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:40.866168  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.870407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.873946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:40.874058  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:40.908301  346554 cri.go:89] found id: ""
	I1002 07:21:40.908367  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.908391  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:40.908417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:40.908494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:40.937896  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:40.937966  346554 cri.go:89] found id: ""
	I1002 07:21:40.937990  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:40.938080  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.941880  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:40.941952  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:40.967147  346554 cri.go:89] found id: ""
	I1002 07:21:40.967174  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.967190  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:40.967226  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:40.967238  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:41.061039  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:41.061077  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:41.080254  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:41.080282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:41.108521  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:41.108547  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:41.162117  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:41.162154  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:41.233238  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:41.233276  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:41.260363  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:41.260392  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:41.333767  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:41.325094    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.325822    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.326721    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328411    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328796    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:41.325094    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.325822    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.326721    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328411    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328796    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:41.333840  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:41.333863  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:41.370518  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:41.370556  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:41.399620  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:41.399646  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:41.485257  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:41.485299  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:44.031564  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:44.043423  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:44.043501  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:44.077366  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:44.077391  346554 cri.go:89] found id: ""
	I1002 07:21:44.077400  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:44.077473  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.082216  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:44.082297  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:44.114495  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:44.114564  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:44.114585  346554 cri.go:89] found id: ""
	I1002 07:21:44.114612  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:44.114701  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.118699  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.122876  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:44.122955  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:44.161976  346554 cri.go:89] found id: ""
	I1002 07:21:44.162003  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.162015  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:44.162021  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:44.162120  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:44.190658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:44.190682  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:44.190688  346554 cri.go:89] found id: ""
	I1002 07:21:44.190695  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:44.190800  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.194562  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.198424  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:44.198514  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:44.224096  346554 cri.go:89] found id: ""
	I1002 07:21:44.224158  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.224181  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:44.224207  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:44.224284  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:44.251545  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:44.251569  346554 cri.go:89] found id: ""
	I1002 07:21:44.251581  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:44.251639  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.255354  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:44.255428  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:44.282373  346554 cri.go:89] found id: ""
	I1002 07:21:44.282400  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.282409  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:44.282419  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:44.282431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:44.308028  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:44.308062  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:44.363685  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:44.363723  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:44.396318  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:44.396349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:44.442337  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:44.442370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:44.546740  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:44.546778  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:44.562701  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:44.562734  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:44.638865  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:44.629817    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.630563    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632343    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632894    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.634422    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:44.629817    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.630563    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632343    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632894    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.634422    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:44.638901  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:44.638934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:44.675050  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:44.675117  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:44.759066  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:44.759108  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:44.789536  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:44.789569  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:47.372747  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:47.384470  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:47.384538  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:47.411456  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:47.411476  346554 cri.go:89] found id: ""
	I1002 07:21:47.411484  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:47.411538  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.415979  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:47.416052  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:47.441980  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:47.442000  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:47.442005  346554 cri.go:89] found id: ""
	I1002 07:21:47.442012  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:47.442071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.446178  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.449820  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:47.449889  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:47.480516  346554 cri.go:89] found id: ""
	I1002 07:21:47.480597  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.480614  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:47.480622  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:47.480700  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:47.512233  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:47.512299  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:47.512321  346554 cri.go:89] found id: ""
	I1002 07:21:47.512347  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:47.512447  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.517986  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.522484  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:47.522599  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:47.554391  346554 cri.go:89] found id: ""
	I1002 07:21:47.554459  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.554483  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:47.554509  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:47.554608  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:47.581519  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:47.581586  346554 cri.go:89] found id: ""
	I1002 07:21:47.581608  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:47.581710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.585885  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:47.585999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:47.615242  346554 cri.go:89] found id: ""
	I1002 07:21:47.615272  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.615281  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:47.615291  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:47.615322  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:47.635364  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:47.635394  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:47.712651  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:47.703908    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.704731    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.705628    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.706326    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.707409    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:47.703908    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.704731    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.705628    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.706326    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.707409    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:47.712678  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:47.712694  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:47.743506  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:47.743536  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:47.811148  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:47.811227  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:47.870291  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:47.870324  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:47.910224  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:47.910257  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:47.939069  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:47.939155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:47.964969  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:47.965008  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:48.043117  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:48.043158  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:48.088315  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:48.088344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:50.689757  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:50.700824  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:50.700893  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:50.728143  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:50.728166  346554 cri.go:89] found id: ""
	I1002 07:21:50.728175  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:50.728244  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.732333  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:50.732406  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:50.757855  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:50.757880  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:50.757886  346554 cri.go:89] found id: ""
	I1002 07:21:50.757905  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:50.757972  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.762029  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.765976  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:50.766050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:50.799256  346554 cri.go:89] found id: ""
	I1002 07:21:50.799278  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.799287  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:50.799293  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:50.799360  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:50.831950  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:50.831974  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:50.831981  346554 cri.go:89] found id: ""
	I1002 07:21:50.831988  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:50.832045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.836319  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.840585  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:50.840668  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:50.870390  346554 cri.go:89] found id: ""
	I1002 07:21:50.870416  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.870428  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:50.870436  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:50.870502  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:50.900076  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:50.900103  346554 cri.go:89] found id: ""
	I1002 07:21:50.900112  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:50.900193  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.904363  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:50.904461  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:50.932728  346554 cri.go:89] found id: ""
	I1002 07:21:50.932755  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.932775  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:50.932786  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:50.932798  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:51.001280  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:50.992878    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.993924    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.994793    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.995597    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.997141    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:50.992878    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.993924    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.994793    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.995597    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.997141    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:51.001310  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:51.001326  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:51.032692  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:51.032721  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:51.086523  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:51.086563  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:51.151924  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:51.151959  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:51.181936  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:51.181965  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:51.209313  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:51.209340  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:51.246072  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:51.246103  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:51.328956  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:51.328991  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:51.362658  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:51.362692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:51.461576  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:51.461615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:53.981504  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:53.992767  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:53.992841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:54.027324  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:54.027347  346554 cri.go:89] found id: ""
	I1002 07:21:54.027356  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:54.027422  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.031946  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:54.032021  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:54.059889  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:54.059911  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:54.059916  346554 cri.go:89] found id: ""
	I1002 07:21:54.059924  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:54.059983  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.064071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.068437  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:54.068516  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:54.100879  346554 cri.go:89] found id: ""
	I1002 07:21:54.100906  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.100917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:54.100923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:54.101019  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:54.127769  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:54.127792  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:54.127798  346554 cri.go:89] found id: ""
	I1002 07:21:54.127806  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:54.127871  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.131837  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.135428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:54.135507  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:54.163909  346554 cri.go:89] found id: ""
	I1002 07:21:54.163934  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.163943  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:54.163950  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:54.164008  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:54.195746  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:54.195778  346554 cri.go:89] found id: ""
	I1002 07:21:54.195787  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:54.195846  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.200638  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:54.200733  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:54.228414  346554 cri.go:89] found id: ""
	I1002 07:21:54.228492  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.228518  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:54.228534  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:54.228548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:54.261854  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:54.261884  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:54.337793  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:54.329984    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.330545    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332031    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332516    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.334074    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:54.329984    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.330545    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332031    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332516    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.334074    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:54.337814  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:54.337828  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:54.374142  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:54.374176  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:54.444394  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:54.444430  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:54.487047  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:54.487074  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:54.531639  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:54.531667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:54.639157  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:54.639196  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:54.655755  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:54.655784  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:54.685950  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:54.685978  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:54.753837  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:54.753879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:57.341138  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:57.351729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:57.351806  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:57.383937  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:57.383962  346554 cri.go:89] found id: ""
	I1002 07:21:57.383970  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:57.384030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.387697  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:57.387774  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:57.413348  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:57.413372  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:57.413377  346554 cri.go:89] found id: ""
	I1002 07:21:57.413385  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:57.413451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.417397  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.420826  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:57.420904  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:57.453888  346554 cri.go:89] found id: ""
	I1002 07:21:57.453913  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.453922  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:57.453928  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:57.453986  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:57.483451  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:57.483472  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:57.483476  346554 cri.go:89] found id: ""
	I1002 07:21:57.483483  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:57.483541  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.487407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.490932  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:57.491034  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:57.526291  346554 cri.go:89] found id: ""
	I1002 07:21:57.526318  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.526327  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:57.526334  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:57.526391  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:57.554217  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:57.554297  346554 cri.go:89] found id: ""
	I1002 07:21:57.554320  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:57.554415  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.558417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:57.558494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:57.590610  346554 cri.go:89] found id: ""
	I1002 07:21:57.590632  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.590640  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:57.590649  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:57.590662  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:57.686336  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:57.686376  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:57.717511  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:57.717543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:57.754283  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:57.754326  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:57.785227  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:57.785258  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:57.869305  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:57.869342  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:57.909139  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:57.909171  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:57.926456  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:57.926487  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:57.995639  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:57.987505    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.988090    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.989876    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.990282    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.991551    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:57.987505    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.988090    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.989876    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.990282    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.991551    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:57.995664  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:57.995679  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:58.058207  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:58.058248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:58.125241  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:58.125284  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:00.654876  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:00.665832  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:00.665905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:00.693874  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:00.693939  346554 cri.go:89] found id: ""
	I1002 07:22:00.693962  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:00.694054  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.697859  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:00.697934  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:00.725245  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:00.725270  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:00.725276  346554 cri.go:89] found id: ""
	I1002 07:22:00.725284  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:00.725364  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.729223  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.732817  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:00.732935  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:00.758839  346554 cri.go:89] found id: ""
	I1002 07:22:00.758906  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.758929  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:00.758953  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:00.759039  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:00.799071  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:00.799149  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:00.799155  346554 cri.go:89] found id: ""
	I1002 07:22:00.799162  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:00.799234  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.803167  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.806750  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:00.806845  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:00.839560  346554 cri.go:89] found id: ""
	I1002 07:22:00.839587  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.839596  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:00.839602  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:00.839660  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:00.870224  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:00.870255  346554 cri.go:89] found id: ""
	I1002 07:22:00.870263  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:00.870336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.874393  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:00.874495  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:00.912075  346554 cri.go:89] found id: ""
	I1002 07:22:00.912105  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.912114  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:00.912124  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:00.912136  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:00.937824  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:00.937853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:00.995416  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:00.995451  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:01.066170  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:01.066205  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:01.097565  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:01.097596  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:01.177599  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:01.177641  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:01.279014  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:01.279051  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:01.294984  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:01.295013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:01.367956  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:01.359956    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.360472    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362061    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362543    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.364048    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:01.359956    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.360472    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362061    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362543    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.364048    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:01.368020  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:01.368050  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:01.410820  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:01.410865  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:01.438796  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:01.438821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:03.971937  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:03.983881  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:03.983958  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:04.015026  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:04.015047  346554 cri.go:89] found id: ""
	I1002 07:22:04.015055  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:04.015146  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.019432  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:04.019511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:04.047606  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:04.047638  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:04.047644  346554 cri.go:89] found id: ""
	I1002 07:22:04.047651  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:04.047716  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.052312  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.055940  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:04.056013  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:04.084749  346554 cri.go:89] found id: ""
	I1002 07:22:04.084774  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.084784  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:04.084791  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:04.084858  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:04.115693  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:04.115718  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:04.115724  346554 cri.go:89] found id: ""
	I1002 07:22:04.115732  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:04.115791  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.119451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.123387  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:04.123509  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:04.160601  346554 cri.go:89] found id: ""
	I1002 07:22:04.160634  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.160643  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:04.160650  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:04.160709  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:04.186914  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:04.186975  346554 cri.go:89] found id: ""
	I1002 07:22:04.187000  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:04.187074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.190897  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:04.190972  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:04.217225  346554 cri.go:89] found id: ""
	I1002 07:22:04.217292  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.217306  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:04.217320  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:04.217332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:04.248848  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:04.248876  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:04.265771  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:04.265801  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:04.331344  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:04.323383    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.324116    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.325749    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.326044    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.327474    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:04.323383    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.324116    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.325749    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.326044    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.327474    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:04.331380  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:04.331395  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:04.358729  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:04.358757  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:04.416966  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:04.417007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:04.455261  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:04.455298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:04.483009  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:04.483037  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:04.563547  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:04.563585  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:04.668263  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:04.668301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:04.744129  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:04.744172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
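
The transcript above repeats one fixed diagnostic sweep: minikube first checks for a running kube-apiserver process (pgrep), then enumerates containers per component with crictl, and finally tails each component's logs plus kubelet/CRI-O journals. As a minimal sketch for reproducing the same data by hand on the node (all commands are the ones visible in the ssh_runner lines above; the container-ID placeholder and the quoting are mine, and the 400-line limits simply mirror this run):

	# Hedged reproduction of minikube's log-gathering sweep; run inside `minikube ssh`.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'            # is an apiserver process up at all?
	sudo crictl ps -a --quiet --name=kube-apiserver          # container IDs, one query per component
	sudo crictl ps -a --quiet --name=etcd
	sudo /usr/local/bin/crictl logs --tail 400 <container-id>   # <container-id> from the ps output above
	sudo journalctl -u kubelet -n 400                         # kubelet and CRI-O service journals
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
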
	I1002 07:22:07.275239  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:07.285854  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:07.285925  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:07.312977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:07.312997  346554 cri.go:89] found id: ""
	I1002 07:22:07.313005  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:07.313060  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.316845  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:07.316920  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:07.346852  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:07.346874  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:07.346879  346554 cri.go:89] found id: ""
	I1002 07:22:07.346887  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:07.346943  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.350635  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.354162  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:07.354227  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:07.383691  346554 cri.go:89] found id: ""
	I1002 07:22:07.383716  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.383725  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:07.383732  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:07.383790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:07.412740  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:07.412762  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:07.412768  346554 cri.go:89] found id: ""
	I1002 07:22:07.412775  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:07.412874  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.416633  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.420294  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:07.420370  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:07.448452  346554 cri.go:89] found id: ""
	I1002 07:22:07.448481  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.448496  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:07.448503  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:07.448573  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:07.478691  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:07.478759  346554 cri.go:89] found id: ""
	I1002 07:22:07.478782  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:07.478877  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.484491  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:07.484566  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:07.526882  346554 cri.go:89] found id: ""
	I1002 07:22:07.526907  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.526916  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:07.526926  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:07.526940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:07.543682  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:07.543709  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:07.622365  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:07.613920    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.614676    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616380    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616942    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.618513    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:07.613920    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.614676    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616380    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616942    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.618513    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:07.622386  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:07.622401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:07.688381  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:07.688417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:07.716317  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:07.716368  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:07.765160  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:07.765187  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:07.863442  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:07.863480  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:07.890947  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:07.890975  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:07.931413  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:07.931445  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:07.994034  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:07.994116  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:08.029432  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:08.029459  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:10.612654  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:10.624226  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:10.624295  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:10.651797  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:10.651820  346554 cri.go:89] found id: ""
	I1002 07:22:10.651829  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:10.651887  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.655778  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:10.655861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:10.682781  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:10.682804  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:10.682810  346554 cri.go:89] found id: ""
	I1002 07:22:10.682817  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:10.682873  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.686610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.690176  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:10.690248  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:10.716340  346554 cri.go:89] found id: ""
	I1002 07:22:10.716365  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.716374  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:10.716380  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:10.716450  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:10.744916  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:10.744941  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:10.744947  346554 cri.go:89] found id: ""
	I1002 07:22:10.744954  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:10.745009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.748825  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.752367  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:10.752459  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:10.778426  346554 cri.go:89] found id: ""
	I1002 07:22:10.778491  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.778519  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:10.778545  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:10.778634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:10.816930  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:10.816956  346554 cri.go:89] found id: ""
	I1002 07:22:10.816965  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:10.817021  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.820675  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:10.820748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:10.848624  346554 cri.go:89] found id: ""
	I1002 07:22:10.848692  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.848716  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:10.848747  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:10.848784  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:10.949146  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:10.949183  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:10.966424  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:10.966503  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:11.050571  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:11.041861    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.042811    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044425    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044785    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.047001    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:11.041861    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.042811    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044425    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044785    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.047001    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:11.050590  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:11.050607  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:11.096274  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:11.096305  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:11.163795  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:11.163833  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:11.198136  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:11.198167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:11.281776  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:11.281815  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:11.314298  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:11.314329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:11.346046  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:11.346074  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:11.401509  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:11.401546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:13.937437  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:13.948853  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:13.948931  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:13.978524  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:13.978546  346554 cri.go:89] found id: ""
	I1002 07:22:13.978562  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:13.978622  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:13.983904  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:13.984002  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:14.018404  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:14.018427  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:14.018432  346554 cri.go:89] found id: ""
	I1002 07:22:14.018441  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:14.018501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.022898  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.027485  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:14.027580  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:14.067189  346554 cri.go:89] found id: ""
	I1002 07:22:14.067277  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.067293  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:14.067301  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:14.067380  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:14.098843  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:14.098868  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:14.098874  346554 cri.go:89] found id: ""
	I1002 07:22:14.098882  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:14.098938  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.103497  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.107744  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:14.107820  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:14.136768  346554 cri.go:89] found id: ""
	I1002 07:22:14.136797  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.136807  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:14.136813  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:14.136880  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:14.163984  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:14.164055  346554 cri.go:89] found id: ""
	I1002 07:22:14.164079  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:14.164165  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.168259  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:14.168337  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:14.201762  346554 cri.go:89] found id: ""
	I1002 07:22:14.201789  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.201799  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:14.201809  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:14.201822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:14.228036  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:14.228067  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:14.305247  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:14.305286  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:14.417180  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:14.417216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:14.434371  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:14.434404  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:14.494496  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:14.494534  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:14.530240  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:14.530274  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:14.565285  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:14.565312  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:14.656059  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:14.648012    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.648398    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.649913    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.650225    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.651841    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:14.648012    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.648398    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.649913    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.650225    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.651841    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:14.656082  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:14.656096  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:14.684431  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:14.684465  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:14.720953  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:14.720987  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:17.291251  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:17.303244  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:17.303315  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:17.330183  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:17.330208  346554 cri.go:89] found id: ""
	I1002 07:22:17.330217  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:17.330281  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.334207  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:17.334281  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:17.363238  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:17.363263  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:17.363269  346554 cri.go:89] found id: ""
	I1002 07:22:17.363276  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:17.363331  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.367005  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.370719  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:17.370792  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:17.397991  346554 cri.go:89] found id: ""
	I1002 07:22:17.398016  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.398026  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:17.398032  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:17.398092  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:17.431537  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:17.431562  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:17.431568  346554 cri.go:89] found id: ""
	I1002 07:22:17.431575  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:17.431631  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.435774  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.439628  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:17.439701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:17.470573  346554 cri.go:89] found id: ""
	I1002 07:22:17.470598  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.470614  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:17.470621  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:17.470689  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:17.496787  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:17.496813  346554 cri.go:89] found id: ""
	I1002 07:22:17.496822  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:17.496879  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.500676  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:17.500809  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:17.528111  346554 cri.go:89] found id: ""
	I1002 07:22:17.528136  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.528145  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:17.528155  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:17.528167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:17.629228  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:17.629269  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:17.719781  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:17.711134    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.712057    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713690    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713991    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.715616    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:17.711134    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.712057    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713690    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713991    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.715616    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:17.719804  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:17.719818  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:17.791077  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:17.791176  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:17.835873  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:17.835907  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:17.865669  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:17.865698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:17.947809  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:17.947851  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:17.966021  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:17.966054  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:17.993388  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:17.993419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:18.067826  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:18.067915  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:18.098854  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:18.098928  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:20.640412  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:20.654177  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:20.654280  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:20.689110  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:20.689138  346554 cri.go:89] found id: ""
	I1002 07:22:20.689146  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:20.689210  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.692968  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:20.693043  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:20.726246  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:20.726271  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:20.726276  346554 cri.go:89] found id: ""
	I1002 07:22:20.726284  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:20.726340  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.730329  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.734406  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:20.734503  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:20.762306  346554 cri.go:89] found id: ""
	I1002 07:22:20.762332  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.762341  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:20.762348  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:20.762406  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:20.801345  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:20.801370  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:20.801375  346554 cri.go:89] found id: ""
	I1002 07:22:20.801383  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:20.801461  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.805572  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.809363  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:20.809439  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:20.839370  346554 cri.go:89] found id: ""
	I1002 07:22:20.839396  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.839405  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:20.839411  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:20.839487  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:20.866883  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:20.866908  346554 cri.go:89] found id: ""
	I1002 07:22:20.866918  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:20.866994  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.871482  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:20.871602  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:20.915272  346554 cri.go:89] found id: ""
	I1002 07:22:20.915297  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.915306  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:20.915334  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:20.915354  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:20.969984  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:20.970023  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:21.008389  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:21.008426  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:21.097527  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:21.097564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:21.131052  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:21.131112  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:21.250056  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:21.250095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:21.266497  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:21.266528  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:21.336488  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:21.328099    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.328680    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330526    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330860    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.332595    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:21.328099    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.328680    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330526    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330860    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.332595    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:21.336517  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:21.336534  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:21.365447  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:21.365477  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:21.432439  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:21.432517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:21.464158  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:21.464186  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:23.993684  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:24.012128  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:24.012344  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:24.041820  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:24.041844  346554 cri.go:89] found id: ""
	I1002 07:22:24.041853  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:24.041913  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.045939  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:24.046012  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:24.080951  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:24.080971  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:24.080977  346554 cri.go:89] found id: ""
	I1002 07:22:24.080984  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:24.081042  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.086379  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.090878  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:24.090956  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:24.118754  346554 cri.go:89] found id: ""
	I1002 07:22:24.118793  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.118803  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:24.118809  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:24.118876  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:24.162937  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:24.162960  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:24.162967  346554 cri.go:89] found id: ""
	I1002 07:22:24.162975  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:24.163041  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.167416  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.171521  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:24.171612  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:24.198740  346554 cri.go:89] found id: ""
	I1002 07:22:24.198764  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.198774  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:24.198780  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:24.198849  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:24.226586  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:24.226607  346554 cri.go:89] found id: ""
	I1002 07:22:24.226616  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:24.226676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.230625  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:24.230701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:24.258053  346554 cri.go:89] found id: ""
	I1002 07:22:24.258089  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.258100  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:24.258110  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:24.258122  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:24.357393  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:24.357431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:24.375359  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:24.375390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:24.444675  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:24.444714  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:24.484227  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:24.484262  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:24.512674  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:24.512707  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:24.597691  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:24.589362    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.589905    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.591682    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.592352    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.593874    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:24.589362    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.589905    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.591682    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.592352    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.593874    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:24.597712  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:24.597728  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:24.628466  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:24.628492  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:24.706367  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:24.706408  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:24.737446  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:24.737475  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:24.822997  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:24.823036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:27.355482  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:27.366566  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:27.366636  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:27.394804  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:27.394828  346554 cri.go:89] found id: ""
	I1002 07:22:27.394837  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:27.394901  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.398931  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:27.399000  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:27.425553  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:27.425576  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:27.425582  346554 cri.go:89] found id: ""
	I1002 07:22:27.425590  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:27.425651  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.429400  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.433140  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:27.433237  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:27.463605  346554 cri.go:89] found id: ""
	I1002 07:22:27.463626  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.463635  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:27.463642  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:27.463701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:27.493043  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:27.493074  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:27.493080  346554 cri.go:89] found id: ""
	I1002 07:22:27.493087  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:27.493145  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.497072  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.500729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:27.500805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:27.531993  346554 cri.go:89] found id: ""
	I1002 07:22:27.532021  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.532031  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:27.532037  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:27.532097  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:27.559232  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:27.559310  346554 cri.go:89] found id: ""
	I1002 07:22:27.559329  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:27.559400  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.563624  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:27.563744  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:27.593254  346554 cri.go:89] found id: ""
	I1002 07:22:27.593281  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.593302  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:27.593313  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:27.593328  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:27.622961  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:27.622992  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:27.700292  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:27.690392    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.691740    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.692828    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694000    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694658    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:27.690392    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.691740    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.692828    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694000    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694658    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:27.700315  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:27.700329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:27.760790  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:27.760830  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:27.800937  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:27.800976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:27.879230  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:27.879273  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:27.910457  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:27.910561  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:27.998247  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:27.998287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:28.039823  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:28.039856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:28.148384  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:28.148472  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:28.170086  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:28.170114  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:30.702644  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:30.713672  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:30.713748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:30.742461  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:30.742484  346554 cri.go:89] found id: ""
	I1002 07:22:30.742493  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:30.742553  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.746359  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:30.746446  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:30.777229  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:30.777256  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:30.777261  346554 cri.go:89] found id: ""
	I1002 07:22:30.777269  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:30.777345  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.781661  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.785300  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:30.785373  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:30.812435  346554 cri.go:89] found id: ""
	I1002 07:22:30.812465  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.812474  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:30.812481  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:30.812558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:30.839730  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:30.839752  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:30.839758  346554 cri.go:89] found id: ""
	I1002 07:22:30.839765  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:30.839851  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.843582  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.847332  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:30.847414  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:30.877768  346554 cri.go:89] found id: ""
	I1002 07:22:30.877795  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.877804  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:30.877811  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:30.877919  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:30.906930  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:30.906954  346554 cri.go:89] found id: ""
	I1002 07:22:30.906970  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:30.907050  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.911004  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:30.911153  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:30.936781  346554 cri.go:89] found id: ""
	I1002 07:22:30.936817  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.936826  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:30.936836  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:30.936849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:30.963944  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:30.963978  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:31.039393  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:31.039431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:31.056356  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:31.056396  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:31.086443  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:31.086483  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:31.129305  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:31.129342  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:31.206518  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:31.206557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:31.246963  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:31.246992  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:31.349345  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:31.349380  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:31.424210  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:31.415481    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.416258    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.417862    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.418419    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.420138    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:31.415481    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.416258    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.417862    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.418419    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.420138    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:31.424235  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:31.424247  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:31.494342  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:31.494381  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.028701  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:34.039883  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:34.039955  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:34.082124  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:34.082149  346554 cri.go:89] found id: ""
	I1002 07:22:34.082158  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:34.082222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.086333  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:34.086408  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:34.115537  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:34.115562  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:34.115568  346554 cri.go:89] found id: ""
	I1002 07:22:34.115575  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:34.115632  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.119540  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.123109  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:34.123181  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:34.149943  346554 cri.go:89] found id: ""
	I1002 07:22:34.149969  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.149978  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:34.149985  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:34.150098  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:34.177023  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:34.177044  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.177051  346554 cri.go:89] found id: ""
	I1002 07:22:34.177060  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:34.177117  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.180893  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.184341  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:34.184418  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:34.211353  346554 cri.go:89] found id: ""
	I1002 07:22:34.211377  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.211385  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:34.211391  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:34.211449  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:34.237574  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:34.237593  346554 cri.go:89] found id: ""
	I1002 07:22:34.237601  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:34.237659  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.241551  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:34.241626  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:34.272007  346554 cri.go:89] found id: ""
	I1002 07:22:34.272030  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.272039  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:34.272048  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:34.272059  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:34.344503  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:34.344540  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:34.378151  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:34.378181  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:34.479542  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:34.479579  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:34.561912  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:34.553376    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.554044    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.555646    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.556517    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.558373    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:34.553376    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.554044    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.555646    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.556517    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.558373    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:34.561988  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:34.562009  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:34.627010  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:34.627046  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:34.675398  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:34.675431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:34.761258  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:34.761301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:34.783800  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:34.783847  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:34.822817  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:34.822856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.855272  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:34.855298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:37.390316  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:37.401208  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:37.401285  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:37.428835  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:37.428857  346554 cri.go:89] found id: ""
	I1002 07:22:37.428864  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:37.428934  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.433201  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:37.433276  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:37.461633  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:37.461664  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:37.461670  346554 cri.go:89] found id: ""
	I1002 07:22:37.461678  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:37.461736  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.465629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.469272  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:37.469348  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:37.498524  346554 cri.go:89] found id: ""
	I1002 07:22:37.498551  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.498561  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:37.498567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:37.498627  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:37.535431  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:37.535453  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:37.535458  346554 cri.go:89] found id: ""
	I1002 07:22:37.535465  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:37.535523  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.539518  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.543351  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:37.543429  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:37.569817  346554 cri.go:89] found id: ""
	I1002 07:22:37.569886  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.569912  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:37.569938  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:37.570048  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:37.600094  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:37.600161  346554 cri.go:89] found id: ""
	I1002 07:22:37.600184  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:37.600279  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.604474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:37.604627  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:37.635043  346554 cri.go:89] found id: ""
	I1002 07:22:37.635139  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.635164  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:37.635209  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:37.635241  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:37.652712  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:37.652747  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:37.724304  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:37.715214    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.715952    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.717909    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.718653    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.720486    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:37.715214    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.715952    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.717909    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.718653    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.720486    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:37.724327  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:37.724343  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:37.778979  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:37.779018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:37.823368  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:37.823400  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:37.852458  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:37.852487  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:37.935415  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:37.935451  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:38.032660  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:38.032698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:38.062211  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:38.062292  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:38.141041  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:38.141076  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:38.167504  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:38.167535  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:40.716529  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:40.727155  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:40.727237  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:40.759650  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:40.759670  346554 cri.go:89] found id: ""
	I1002 07:22:40.759677  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:40.759739  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.763794  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:40.763891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:40.799428  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:40.799495  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:40.799505  346554 cri.go:89] found id: ""
	I1002 07:22:40.799513  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:40.799587  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.804441  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.808181  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:40.808256  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:40.839434  346554 cri.go:89] found id: ""
	I1002 07:22:40.839458  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.839466  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:40.839479  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:40.839540  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:40.866347  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:40.866368  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:40.866373  346554 cri.go:89] found id: ""
	I1002 07:22:40.866380  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:40.866435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.870243  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.873802  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:40.873887  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:40.915472  346554 cri.go:89] found id: ""
	I1002 07:22:40.915499  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.915508  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:40.915515  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:40.915589  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:40.945530  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:40.945552  346554 cri.go:89] found id: ""
	I1002 07:22:40.945570  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:40.945629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.949410  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:40.949513  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:40.976546  346554 cri.go:89] found id: ""
	I1002 07:22:40.976589  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.976598  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:40.976608  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:40.976620  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:40.993923  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:40.993952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:41.069718  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:41.061732    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.062193    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.063798    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.064141    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.065342    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:41.061732    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.062193    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.063798    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.064141    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.065342    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:41.069746  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:41.069760  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:41.101275  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:41.101313  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:41.185486  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:41.185522  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:41.213391  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:41.213419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:41.286933  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:41.286973  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:41.325032  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:41.325063  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:41.427475  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:41.427517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:41.507722  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:41.507762  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:41.553697  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:41.553731  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:44.083713  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:44.094946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:44.095050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:44.122939  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:44.122961  346554 cri.go:89] found id: ""
	I1002 07:22:44.122970  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:44.123027  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.126926  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:44.127001  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:44.168228  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:44.168253  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:44.168259  346554 cri.go:89] found id: ""
	I1002 07:22:44.168267  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:44.168325  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.172203  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.176051  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:44.176154  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:44.207518  346554 cri.go:89] found id: ""
	I1002 07:22:44.207545  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.207554  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:44.207560  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:44.207619  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:44.236177  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:44.236200  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:44.236206  346554 cri.go:89] found id: ""
	I1002 07:22:44.236214  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:44.236274  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.239868  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.243456  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:44.243575  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:44.269491  346554 cri.go:89] found id: ""
	I1002 07:22:44.269568  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.269596  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:44.269612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:44.269687  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:44.295403  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:44.295423  346554 cri.go:89] found id: ""
	I1002 07:22:44.295431  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:44.295490  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.299440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:44.299555  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:44.333034  346554 cri.go:89] found id: ""
	I1002 07:22:44.333110  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.333136  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:44.333175  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:44.333210  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:44.364108  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:44.364139  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:44.433101  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:44.424314    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.424960    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.426515    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.427164    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.428946    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:44.424314    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.424960    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.426515    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.427164    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.428946    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:44.433123  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:44.433137  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:44.489676  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:44.489711  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:44.535780  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:44.535819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:44.563832  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:44.563862  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:44.644267  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:44.644308  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:44.678038  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:44.678077  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:44.779429  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:44.779467  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:44.802305  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:44.802335  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:44.828371  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:44.828400  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.412789  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:47.423373  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:47.423464  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:47.451136  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:47.451162  346554 cri.go:89] found id: ""
	I1002 07:22:47.451171  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:47.451237  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.455412  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:47.455531  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:47.487387  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:47.487418  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:47.487424  346554 cri.go:89] found id: ""
	I1002 07:22:47.487432  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:47.487491  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.491360  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.495265  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:47.495336  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:47.534120  346554 cri.go:89] found id: ""
	I1002 07:22:47.534144  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.534153  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:47.534159  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:47.534223  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:47.567581  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.567604  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:47.567610  346554 cri.go:89] found id: ""
	I1002 07:22:47.567618  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:47.567676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.571558  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.575428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:47.575500  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:47.604017  346554 cri.go:89] found id: ""
	I1002 07:22:47.604041  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.604050  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:47.604057  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:47.604178  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:47.631246  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:47.631266  346554 cri.go:89] found id: ""
	I1002 07:22:47.631275  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:47.631336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.635224  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:47.635329  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:47.662879  346554 cri.go:89] found id: ""
	I1002 07:22:47.662906  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.662916  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:47.662925  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:47.662969  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:47.758850  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:47.758889  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:47.787003  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:47.787035  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.865561  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:47.865598  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:47.894009  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:47.894083  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:47.911472  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:47.911547  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:47.992995  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:47.978023    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.979713    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986171    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986781    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.988190    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:47.978023    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.979713    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986171    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986781    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.988190    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:47.993061  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:47.993095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:48.054795  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:48.054833  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:48.105647  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:48.105681  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:48.136822  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:48.136852  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:48.221826  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:48.221868  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:50.759146  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:50.770232  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:50.770304  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:50.808978  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:50.808999  346554 cri.go:89] found id: ""
	I1002 07:22:50.809014  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:50.809071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.812891  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:50.812973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:50.844548  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:50.844621  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:50.844634  346554 cri.go:89] found id: ""
	I1002 07:22:50.844643  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:50.844704  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.848854  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.853318  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:50.853395  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:50.879864  346554 cri.go:89] found id: ""
	I1002 07:22:50.879885  346554 logs.go:282] 0 containers: []
	W1002 07:22:50.879894  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:50.879901  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:50.879978  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:50.913482  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:50.913502  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:50.913506  346554 cri.go:89] found id: ""
	I1002 07:22:50.913514  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:50.913571  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.917411  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.920913  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:50.920995  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:50.953742  346554 cri.go:89] found id: ""
	I1002 07:22:50.953769  346554 logs.go:282] 0 containers: []
	W1002 07:22:50.953778  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:50.953785  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:50.953849  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:50.982216  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:50.982239  346554 cri.go:89] found id: ""
	I1002 07:22:50.982247  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:50.982312  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.985960  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:50.986036  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:51.023369  346554 cri.go:89] found id: ""
	I1002 07:22:51.023407  346554 logs.go:282] 0 containers: []
	W1002 07:22:51.023416  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:51.023425  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:51.023437  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:51.124423  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:51.124471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:51.162362  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:51.162466  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:51.193077  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:51.193120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:51.209317  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:51.209348  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:51.286706  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:51.277838    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.278649    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280280    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280639    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.282163    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:51.277838    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.278649    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280280    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280639    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.282163    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:51.286736  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:51.286768  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:51.314928  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:51.315005  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:51.375178  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:51.375216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:51.450324  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:51.450368  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:51.478495  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:51.478526  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:51.563131  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:51.563178  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:54.112345  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:54.123567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:54.123643  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:54.154215  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:54.154239  346554 cri.go:89] found id: ""
	I1002 07:22:54.154247  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:54.154306  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.158242  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:54.158319  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:54.192307  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:54.192332  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:54.192343  346554 cri.go:89] found id: ""
	I1002 07:22:54.192351  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:54.192419  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.197194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.201582  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:54.201705  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:54.228380  346554 cri.go:89] found id: ""
	I1002 07:22:54.228415  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.228425  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:54.228432  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:54.228525  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:54.256056  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:54.256080  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:54.256087  346554 cri.go:89] found id: ""
	I1002 07:22:54.256094  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:54.256155  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.260143  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.263934  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:54.264008  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:54.290214  346554 cri.go:89] found id: ""
	I1002 07:22:54.290241  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.290251  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:54.290256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:54.290314  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:54.319063  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:54.319117  346554 cri.go:89] found id: ""
	I1002 07:22:54.319126  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:54.319184  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.323448  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:54.323547  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:54.354341  346554 cri.go:89] found id: ""
	I1002 07:22:54.354366  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.354374  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:54.354384  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:54.354396  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:54.409595  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:54.409633  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:54.449908  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:54.449944  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:54.532130  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:54.532170  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:54.559794  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:54.559822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:54.593620  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:54.593651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:54.700915  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:54.700951  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:54.727426  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:54.727452  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:54.756226  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:54.756263  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:54.841269  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:54.841312  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:54.859387  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:54.859425  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:54.940701  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:54.932413    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.933246    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.934849    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.935238    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.936807    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:54.932413    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.933246    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.934849    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.935238    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.936807    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:57.441672  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:57.453569  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:57.453639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:57.483699  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:57.483722  346554 cri.go:89] found id: ""
	I1002 07:22:57.483746  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:57.483845  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.487681  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:57.487775  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:57.518495  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:57.518520  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:57.518526  346554 cri.go:89] found id: ""
	I1002 07:22:57.518534  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:57.518593  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.522615  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.526448  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:57.526523  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:57.553219  346554 cri.go:89] found id: ""
	I1002 07:22:57.553246  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.553255  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:57.553263  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:57.553327  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:57.582109  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:57.582132  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:57.582137  346554 cri.go:89] found id: ""
	I1002 07:22:57.582146  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:57.582209  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.586222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.590675  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:57.590752  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:57.621475  346554 cri.go:89] found id: ""
	I1002 07:22:57.621544  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.621567  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:57.621592  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:57.621680  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:57.647238  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:57.647304  346554 cri.go:89] found id: ""
	I1002 07:22:57.647329  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:57.647425  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.651299  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:57.651391  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:57.681221  346554 cri.go:89] found id: ""
	I1002 07:22:57.681298  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.681324  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:57.681350  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:57.681387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:57.757042  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:57.757079  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:57.789483  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:57.789519  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:57.876258  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:57.876301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:57.909957  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:57.909986  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:57.994768  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:57.985195    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.985977    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.987651    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.988458    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.990380    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:57.985195    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.985977    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.987651    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.988458    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.990380    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:57.994790  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:57.994804  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:58.057805  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:58.057845  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:58.093196  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:58.093227  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:58.192017  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:58.192055  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:58.209558  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:58.209587  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:58.236404  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:58.236433  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:00.781745  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:00.796477  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:00.796552  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:00.823241  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:00.823265  346554 cri.go:89] found id: ""
	I1002 07:23:00.823273  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:00.823327  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.827586  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:00.827675  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:00.862251  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:00.862274  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:00.862280  346554 cri.go:89] found id: ""
	I1002 07:23:00.862287  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:00.862348  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.866453  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.870120  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:00.870189  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:00.910250  346554 cri.go:89] found id: ""
	I1002 07:23:00.910318  346554 logs.go:282] 0 containers: []
	W1002 07:23:00.910341  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:00.910366  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:00.910451  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:00.939142  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:00.939208  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:00.939234  346554 cri.go:89] found id: ""
	I1002 07:23:00.939243  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:00.939300  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.943281  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.947110  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:00.947180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:00.979402  346554 cri.go:89] found id: ""
	I1002 07:23:00.979431  346554 logs.go:282] 0 containers: []
	W1002 07:23:00.979444  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:00.979452  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:00.979518  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:01.016038  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:01.016103  346554 cri.go:89] found id: ""
	I1002 07:23:01.016131  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:01.016225  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:01.020366  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:01.020520  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:01.049712  346554 cri.go:89] found id: ""
	I1002 07:23:01.049780  346554 logs.go:282] 0 containers: []
	W1002 07:23:01.049803  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:01.049831  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:01.049870  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:01.101253  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:01.101287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:01.200014  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:01.200053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:01.277860  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:01.264774    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.266699    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.271332    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.272085    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.273912    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:01.264774    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.266699    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.271332    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.272085    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.273912    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:01.277885  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:01.277898  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:01.341507  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:01.341545  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:01.413278  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:01.413313  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:01.446875  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:01.446914  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:01.475436  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:01.475464  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:01.551813  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:01.551853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:01.585150  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:01.585187  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:01.601574  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:01.601606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:04.131042  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:04.142520  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:04.142634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:04.176669  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:04.176692  346554 cri.go:89] found id: ""
	I1002 07:23:04.176701  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:04.176763  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.180972  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:04.181051  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:04.208821  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:04.208846  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:04.208851  346554 cri.go:89] found id: ""
	I1002 07:23:04.208859  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:04.208925  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.213191  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.217006  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:04.217129  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:04.245751  346554 cri.go:89] found id: ""
	I1002 07:23:04.245775  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.245790  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:04.245798  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:04.245859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:04.284664  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:04.284685  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:04.284689  346554 cri.go:89] found id: ""
	I1002 07:23:04.284697  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:04.284756  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.288986  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.292617  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:04.292700  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:04.320145  346554 cri.go:89] found id: ""
	I1002 07:23:04.320171  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.320180  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:04.320187  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:04.320245  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:04.347600  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:04.347622  346554 cri.go:89] found id: ""
	I1002 07:23:04.347631  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:04.347686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.351440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:04.351511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:04.383653  346554 cri.go:89] found id: ""
	I1002 07:23:04.383732  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.383749  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:04.383759  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:04.383775  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:04.440177  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:04.440218  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:04.468956  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:04.469027  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:04.545741  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:04.545780  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:04.579865  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:04.579895  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:04.681656  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:04.681695  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:04.752352  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:04.744202   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.744834   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746456   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746996   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.748061   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:04.744202   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.744834   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746456   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746996   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.748061   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:04.752373  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:04.752387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:04.793420  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:04.793493  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:04.864258  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:04.864293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:04.893921  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:04.894006  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:04.911663  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:04.911693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.444239  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:07.455140  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:07.455218  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:07.484101  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.484124  346554 cri.go:89] found id: ""
	I1002 07:23:07.484133  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:07.484189  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.488067  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:07.488145  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:07.522958  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:07.523021  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:07.523044  346554 cri.go:89] found id: ""
	I1002 07:23:07.523071  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:07.523194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.527249  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.531022  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:07.531124  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:07.557498  346554 cri.go:89] found id: ""
	I1002 07:23:07.557519  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.557528  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:07.557535  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:07.557609  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:07.584061  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:07.584092  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:07.584096  346554 cri.go:89] found id: ""
	I1002 07:23:07.584105  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:07.584170  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.587957  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.591564  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:07.591639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:07.619944  346554 cri.go:89] found id: ""
	I1002 07:23:07.619971  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.619980  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:07.619987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:07.620050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:07.648834  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:07.648855  346554 cri.go:89] found id: ""
	I1002 07:23:07.648863  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:07.648919  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.652819  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:07.652937  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:07.682396  346554 cri.go:89] found id: ""
	I1002 07:23:07.682421  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.682430  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:07.682439  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:07.682452  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:07.751625  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:07.743061   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.744026   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.745740   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.746058   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.747713   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:07.743061   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.744026   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.745740   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.746058   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.747713   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:07.751650  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:07.751667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.778524  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:07.778551  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:07.850872  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:07.850910  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:07.887246  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:07.887283  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:07.959701  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:07.959738  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:07.989632  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:07.989661  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:08.009848  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:08.009885  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:08.041024  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:08.041052  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:08.120762  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:08.120798  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:08.174204  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:08.174234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:10.791227  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:10.804748  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:10.804834  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:10.833209  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:10.833256  346554 cri.go:89] found id: ""
	I1002 07:23:10.833264  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:10.833327  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.837233  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:10.837307  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:10.867407  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:10.867431  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:10.867436  346554 cri.go:89] found id: ""
	I1002 07:23:10.867444  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:10.867501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.871289  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.874962  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:10.875041  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:10.909346  346554 cri.go:89] found id: ""
	I1002 07:23:10.909372  346554 logs.go:282] 0 containers: []
	W1002 07:23:10.909381  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:10.909388  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:10.909444  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:10.944052  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:10.944127  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:10.944152  346554 cri.go:89] found id: ""
	I1002 07:23:10.944181  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:10.944285  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.952530  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.957003  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:10.957085  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:10.984253  346554 cri.go:89] found id: ""
	I1002 07:23:10.984287  346554 logs.go:282] 0 containers: []
	W1002 07:23:10.984297  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:10.984321  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:10.984401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:11.018350  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:11.018417  346554 cri.go:89] found id: ""
	I1002 07:23:11.018442  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:11.018520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:11.022612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:11.022707  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:11.054294  346554 cri.go:89] found id: ""
	I1002 07:23:11.054371  346554 logs.go:282] 0 containers: []
	W1002 07:23:11.054394  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:11.054437  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:11.054471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:11.132821  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:11.124867   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.125650   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.126895   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.127432   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.129002   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:11.124867   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.125650   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.126895   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.127432   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.129002   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:11.132846  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:11.132859  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:11.161373  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:11.161401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:11.219899  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:11.219936  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:11.250524  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:11.250554  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:11.282533  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:11.282564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:11.385870  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:11.385909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:11.402968  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:11.402997  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:11.447948  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:11.447983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:11.521218  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:11.521256  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:11.551246  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:11.551320  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:14.129146  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:14.140212  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:14.140315  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:14.167561  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:14.167585  346554 cri.go:89] found id: ""
	I1002 07:23:14.167593  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:14.167691  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.171728  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:14.171841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:14.198571  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:14.198594  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:14.198600  346554 cri.go:89] found id: ""
	I1002 07:23:14.198607  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:14.198693  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.202658  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.207962  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:14.208057  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:14.233944  346554 cri.go:89] found id: ""
	I1002 07:23:14.233970  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.233979  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:14.233986  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:14.234064  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:14.264854  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:14.264878  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:14.264884  346554 cri.go:89] found id: ""
	I1002 07:23:14.264892  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:14.264948  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.268797  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.272677  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:14.272756  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:14.304992  346554 cri.go:89] found id: ""
	I1002 07:23:14.305031  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.305041  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:14.305047  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:14.305120  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:14.335500  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:14.335570  346554 cri.go:89] found id: ""
	I1002 07:23:14.335593  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:14.335684  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.339428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:14.339502  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:14.366928  346554 cri.go:89] found id: ""
	I1002 07:23:14.366954  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.366964  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:14.366973  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:14.366984  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:14.441765  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:14.441808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:14.473510  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:14.473541  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:14.552162  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:14.552201  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:14.586130  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:14.586160  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:14.602135  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:14.602164  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:14.638523  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:14.638557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:14.717772  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:14.717808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:14.748211  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:14.748283  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:14.848964  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:14.849003  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:14.926254  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:14.916550   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.917229   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.918910   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.919742   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.921374   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:14.916550   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.917229   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.918910   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.919742   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.921374   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:14.926277  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:14.926290  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:17.456912  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:17.467889  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:17.467979  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:17.495434  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:17.495457  346554 cri.go:89] found id: ""
	I1002 07:23:17.495466  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:17.495524  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.499591  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:17.499663  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:17.535737  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:17.535757  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:17.535761  346554 cri.go:89] found id: ""
	I1002 07:23:17.535768  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:17.535826  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.540069  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.543817  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:17.543891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:17.573877  346554 cri.go:89] found id: ""
	I1002 07:23:17.573907  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.573917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:17.573923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:17.573989  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:17.609297  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:17.609320  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:17.609326  346554 cri.go:89] found id: ""
	I1002 07:23:17.609333  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:17.609390  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.613640  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.617183  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:17.617253  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:17.647944  346554 cri.go:89] found id: ""
	I1002 07:23:17.647971  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.647980  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:17.647987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:17.648045  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:17.674528  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:17.674552  346554 cri.go:89] found id: ""
	I1002 07:23:17.674561  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:17.674617  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.678979  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:17.679143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:17.706803  346554 cri.go:89] found id: ""
	I1002 07:23:17.706828  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.706837  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:17.706846  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:17.706857  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:17.801171  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:17.801207  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:17.817922  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:17.817952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:17.889064  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:17.889103  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:17.971481  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:17.971518  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:18.051668  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:18.051712  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:18.090695  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:18.090723  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:18.162304  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:18.153808   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.154523   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156207   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156763   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.158433   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:18.153808   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.154523   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156207   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156763   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.158433   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:18.162328  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:18.162343  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:18.194200  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:18.194233  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:18.231522  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:18.231557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:18.263215  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:18.263246  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:20.795234  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:20.807871  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:20.807939  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:20.839049  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:20.839070  346554 cri.go:89] found id: ""
	I1002 07:23:20.839098  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:20.839172  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.842946  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:20.843023  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:20.873446  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:20.873469  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:20.873475  346554 cri.go:89] found id: ""
	I1002 07:23:20.873484  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:20.873540  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.877435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.881337  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:20.881415  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:20.918940  346554 cri.go:89] found id: ""
	I1002 07:23:20.918971  346554 logs.go:282] 0 containers: []
	W1002 07:23:20.918980  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:20.918987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:20.919046  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:20.951052  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:20.951075  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:20.951112  346554 cri.go:89] found id: ""
	I1002 07:23:20.951120  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:20.951185  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.955805  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.959649  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:20.959737  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:20.987685  346554 cri.go:89] found id: ""
	I1002 07:23:20.987710  346554 logs.go:282] 0 containers: []
	W1002 07:23:20.987719  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:20.987726  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:20.987792  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:21.028577  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:21.028602  346554 cri.go:89] found id: ""
	I1002 07:23:21.028622  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:21.028683  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:21.032899  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:21.032977  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:21.062654  346554 cri.go:89] found id: ""
	I1002 07:23:21.062679  346554 logs.go:282] 0 containers: []
	W1002 07:23:21.062688  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:21.062698  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:21.062710  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:21.091027  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:21.091059  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:21.159267  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:21.159307  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:21.231814  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:21.231856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:21.263174  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:21.263205  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:21.310161  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:21.310194  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:21.349961  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:21.349997  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:21.379224  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:21.379306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:21.454682  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:21.454722  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:21.560920  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:21.560960  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:21.578179  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:21.578211  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:21.668218  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:21.658544   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.659665   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.660225   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662214   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662758   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:21.658544   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.659665   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.660225   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662214   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662758   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:24.169201  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:24.181390  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:24.181463  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:24.213873  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:24.213896  346554 cri.go:89] found id: ""
	I1002 07:23:24.213905  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:24.213963  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.217730  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:24.217807  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:24.252439  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:24.252471  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:24.252476  346554 cri.go:89] found id: ""
	I1002 07:23:24.252484  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:24.252567  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.256307  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.260273  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:24.260349  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:24.287826  346554 cri.go:89] found id: ""
	I1002 07:23:24.287852  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.287862  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:24.287870  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:24.287973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:24.315859  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:24.315884  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:24.315890  346554 cri.go:89] found id: ""
	I1002 07:23:24.315897  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:24.315975  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.319993  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.323777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:24.323877  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:24.354601  346554 cri.go:89] found id: ""
	I1002 07:23:24.354631  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.354642  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:24.354648  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:24.354730  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:24.384370  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:24.384395  346554 cri.go:89] found id: ""
	I1002 07:23:24.384403  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:24.384488  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.388615  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:24.388695  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:24.415488  346554 cri.go:89] found id: ""
	I1002 07:23:24.415514  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.415523  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:24.415533  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:24.415546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:24.458158  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:24.458192  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:24.534624  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:24.534667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:24.567982  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:24.568016  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:24.596275  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:24.596306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:24.674293  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:24.674334  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:24.777997  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:24.778039  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:24.801006  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:24.801036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:24.862265  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:24.862303  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:24.913721  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:24.913755  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:24.991414  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:24.983196   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.983791   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985038   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985724   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.987370   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:24.983196   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.983791   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985038   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985724   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.987370   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:24.991443  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:24.991458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.525665  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:27.536783  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:27.536869  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:27.563440  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.563507  346554 cri.go:89] found id: ""
	I1002 07:23:27.563531  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:27.563623  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.568154  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:27.568278  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:27.597184  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:27.597205  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:27.597211  346554 cri.go:89] found id: ""
	I1002 07:23:27.597230  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:27.597306  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.601073  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.604808  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:27.604880  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:27.635124  346554 cri.go:89] found id: ""
	I1002 07:23:27.635147  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.635155  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:27.635161  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:27.635220  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:27.662383  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:27.662455  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:27.662474  346554 cri.go:89] found id: ""
	I1002 07:23:27.662500  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:27.662607  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.666537  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.670164  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:27.670238  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:27.697001  346554 cri.go:89] found id: ""
	I1002 07:23:27.697028  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.697037  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:27.697044  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:27.697127  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:27.722638  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:27.722662  346554 cri.go:89] found id: ""
	I1002 07:23:27.722672  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:27.722728  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.726512  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:27.726591  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:27.755270  346554 cri.go:89] found id: ""
	I1002 07:23:27.755300  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.755309  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:27.755319  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:27.755330  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:27.854338  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:27.854379  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:27.928550  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:27.920395   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.921207   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.922978   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.923800   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.924646   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:27.920395   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.921207   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.922978   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.923800   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.924646   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:27.928577  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:27.928590  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.960015  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:27.960047  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:28.025647  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:28.025706  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:28.064089  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:28.064125  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:28.158385  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:28.158423  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:28.196505  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:28.196533  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:28.215893  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:28.215921  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:28.246774  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:28.246821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:28.274010  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:28.274036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:30.852724  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:30.863588  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:30.863660  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:30.891349  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:30.891371  346554 cri.go:89] found id: ""
	I1002 07:23:30.891380  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:30.891457  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.895249  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:30.895343  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:30.922333  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:30.922356  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:30.922361  346554 cri.go:89] found id: ""
	I1002 07:23:30.922368  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:30.922423  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.926269  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.929885  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:30.929957  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:30.956216  346554 cri.go:89] found id: ""
	I1002 07:23:30.956253  346554 logs.go:282] 0 containers: []
	W1002 07:23:30.956269  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:30.956285  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:30.956347  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:30.984076  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:30.984101  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:30.984107  346554 cri.go:89] found id: ""
	I1002 07:23:30.984121  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:30.984182  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.988082  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.991650  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:30.991741  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:31.028148  346554 cri.go:89] found id: ""
	I1002 07:23:31.028174  346554 logs.go:282] 0 containers: []
	W1002 07:23:31.028184  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:31.028190  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:31.028274  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:31.057090  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:31.057116  346554 cri.go:89] found id: ""
	I1002 07:23:31.057125  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:31.057195  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:31.064614  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:31.064695  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:31.096928  346554 cri.go:89] found id: ""
	I1002 07:23:31.096996  346554 logs.go:282] 0 containers: []
	W1002 07:23:31.097022  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:31.097042  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:31.097069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:31.155662  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:31.155701  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:31.202926  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:31.202958  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:31.236483  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:31.236508  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:31.341179  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:31.341216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:31.368996  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:31.369022  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:31.449499  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:31.449539  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:31.476326  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:31.476354  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:31.561871  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:31.561909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:31.597214  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:31.597243  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:31.614646  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:31.614674  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:31.686141  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:31.672626   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.673293   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675177   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675791   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.677294   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:31.672626   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.673293   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675177   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675791   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.677294   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:34.187051  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:34.198084  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:34.198163  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:34.225977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:34.226000  346554 cri.go:89] found id: ""
	I1002 07:23:34.226009  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:34.226094  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.230977  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:34.231053  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:34.258817  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:34.258840  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:34.258845  346554 cri.go:89] found id: ""
	I1002 07:23:34.258853  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:34.258908  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.262894  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.266671  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:34.266772  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:34.296183  346554 cri.go:89] found id: ""
	I1002 07:23:34.296207  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.296217  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:34.296223  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:34.296283  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:34.329604  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:34.329678  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:34.329698  346554 cri.go:89] found id: ""
	I1002 07:23:34.329722  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:34.329830  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.333641  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.337102  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:34.337170  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:34.365600  346554 cri.go:89] found id: ""
	I1002 07:23:34.365626  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.365636  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:34.365645  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:34.365708  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:34.393323  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:34.393347  346554 cri.go:89] found id: ""
	I1002 07:23:34.393357  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:34.393439  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.397338  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:34.397411  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:34.423876  346554 cri.go:89] found id: ""
	I1002 07:23:34.423899  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.423908  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:34.423918  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:34.423934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:34.453221  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:34.453251  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:34.481067  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:34.481095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:34.558614  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:34.558651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:34.601917  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:34.601948  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:34.705602  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:34.705637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:34.769442  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:34.760694   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.761723   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.762620   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764275   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764621   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:34.760694   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.761723   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.762620   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764275   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764621   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:34.769466  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:34.769478  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:34.808589  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:34.808615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:34.869982  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:34.870024  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:34.959694  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:34.959739  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:34.976284  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:34.976319  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:37.518488  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:37.530159  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:37.530242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:37.557004  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:37.557026  346554 cri.go:89] found id: ""
	I1002 07:23:37.557035  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:37.557091  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.560903  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:37.560976  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:37.593556  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:37.593580  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:37.593586  346554 cri.go:89] found id: ""
	I1002 07:23:37.593594  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:37.593652  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.597692  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.601598  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:37.601672  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:37.628723  346554 cri.go:89] found id: ""
	I1002 07:23:37.628751  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.628761  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:37.628767  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:37.628832  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:37.656989  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:37.657010  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:37.657014  346554 cri.go:89] found id: ""
	I1002 07:23:37.657022  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:37.657090  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.660940  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.664730  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:37.664810  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:37.690545  346554 cri.go:89] found id: ""
	I1002 07:23:37.690567  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.690575  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:37.690582  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:37.690638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:37.718139  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:37.718164  346554 cri.go:89] found id: ""
	I1002 07:23:37.718173  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:37.718239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.722013  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:37.722130  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:37.748320  346554 cri.go:89] found id: ""
	I1002 07:23:37.748387  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.748410  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:37.748439  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:37.748478  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:37.848896  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:37.848937  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:37.935000  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:37.926953   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.927824   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929407   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929842   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.931438   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:37.926953   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.927824   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929407   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929842   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.931438   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:37.935035  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:37.935050  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:37.998904  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:37.998949  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:38.039239  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:38.039274  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:38.133839  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:38.133878  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:38.164590  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:38.164617  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:38.247363  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:38.247401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:38.263025  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:38.263053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:38.292185  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:38.292215  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:38.324631  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:38.324662  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:40.856053  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:40.866969  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:40.867037  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:40.908779  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:40.908802  346554 cri.go:89] found id: ""
	I1002 07:23:40.908811  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:40.908882  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.912652  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:40.912724  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:40.938681  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:40.938711  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:40.938717  346554 cri.go:89] found id: ""
	I1002 07:23:40.938725  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:40.938780  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.942512  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.945790  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:40.945860  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:40.973961  346554 cri.go:89] found id: ""
	I1002 07:23:40.974043  346554 logs.go:282] 0 containers: []
	W1002 07:23:40.974067  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:40.974093  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:40.974208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:41.001128  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:41.001152  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:41.001158  346554 cri.go:89] found id: ""
	I1002 07:23:41.001165  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:41.001239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.007592  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.012525  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:41.012642  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:41.044447  346554 cri.go:89] found id: ""
	I1002 07:23:41.044521  346554 logs.go:282] 0 containers: []
	W1002 07:23:41.044545  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:41.044571  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:41.044654  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:41.083149  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:41.083216  346554 cri.go:89] found id: ""
	I1002 07:23:41.083250  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:41.083338  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.087534  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:41.087663  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:41.118406  346554 cri.go:89] found id: ""
	I1002 07:23:41.118470  346554 logs.go:282] 0 containers: []
	W1002 07:23:41.118494  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:41.118528  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:41.118559  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:41.195975  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:41.196011  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:41.227140  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:41.227172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:41.313141  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:41.313180  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:41.416180  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:41.416218  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:41.459495  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:41.459536  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:41.488753  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:41.488785  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:41.532527  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:41.532560  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:41.548856  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:41.548885  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:41.618600  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:41.608308   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.609017   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.611140   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.612779   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.613471   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:41.608308   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.609017   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.611140   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.612779   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.613471   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:41.618624  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:41.618638  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:41.646628  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:41.646656  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.221221  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:44.231877  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:44.231950  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:44.257682  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:44.257714  346554 cri.go:89] found id: ""
	I1002 07:23:44.257724  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:44.257781  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.261470  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:44.261568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:44.291709  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.291732  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:44.291738  346554 cri.go:89] found id: ""
	I1002 07:23:44.291749  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:44.291806  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.295774  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.299744  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:44.299891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:44.326325  346554 cri.go:89] found id: ""
	I1002 07:23:44.326361  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.326372  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:44.326396  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:44.326476  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:44.353658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:44.353682  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:44.353687  346554 cri.go:89] found id: ""
	I1002 07:23:44.353694  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:44.353752  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.357660  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.361374  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:44.361448  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:44.390237  346554 cri.go:89] found id: ""
	I1002 07:23:44.390271  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.390281  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:44.390287  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:44.390356  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:44.421420  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:44.421444  346554 cri.go:89] found id: ""
	I1002 07:23:44.421453  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:44.421520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.425406  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:44.425480  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:44.453498  346554 cri.go:89] found id: ""
	I1002 07:23:44.453575  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.453599  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:44.453627  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:44.453663  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:44.469406  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:44.469489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:44.537881  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:44.529402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.530101   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.531787   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.532402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.534048   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:44.529402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.530101   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.531787   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.532402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.534048   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:44.537947  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:44.537976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:44.566669  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:44.566750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.626234  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:44.626311  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:44.663981  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:44.664015  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:44.743176  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:44.743211  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:44.769609  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:44.769637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:44.850618  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:44.850654  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:44.956047  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:44.956089  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:44.988388  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:44.988421  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:47.617924  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:47.629050  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:47.629142  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:47.657724  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:47.657747  346554 cri.go:89] found id: ""
	I1002 07:23:47.657756  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:47.657814  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.661805  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:47.661878  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:47.691884  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:47.691906  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:47.691911  346554 cri.go:89] found id: ""
	I1002 07:23:47.691919  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:47.691978  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.695983  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.699611  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:47.699685  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:47.731628  346554 cri.go:89] found id: ""
	I1002 07:23:47.731654  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.731664  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:47.731671  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:47.731732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:47.760694  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:47.760718  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:47.760723  346554 cri.go:89] found id: ""
	I1002 07:23:47.760731  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:47.760830  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.764776  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.768282  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:47.768363  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:47.800941  346554 cri.go:89] found id: ""
	I1002 07:23:47.800967  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.800976  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:47.800982  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:47.801049  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:47.828847  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:47.828870  346554 cri.go:89] found id: ""
	I1002 07:23:47.828879  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:47.828955  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.832777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:47.832850  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:47.861095  346554 cri.go:89] found id: ""
	I1002 07:23:47.861122  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.861131  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:47.861141  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:47.861184  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:47.893617  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:47.893649  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:47.990939  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:47.990977  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:48.007073  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:48.007153  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:48.043757  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:48.043786  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:48.136713  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:48.136750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:48.168119  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:48.168151  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:48.251880  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:48.251919  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:48.285530  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:48.285566  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:48.357500  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:48.349599   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.350239   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.351899   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.352380   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.353981   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:48.349599   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.350239   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.351899   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.352380   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.353981   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:48.357522  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:48.357537  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:48.403215  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:48.403293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.006650  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:51.028354  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:51.028471  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:51.057229  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:51.057253  346554 cri.go:89] found id: ""
	I1002 07:23:51.057262  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:51.057329  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.061731  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:51.061807  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:51.089750  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:51.089772  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:51.089778  346554 cri.go:89] found id: ""
	I1002 07:23:51.089785  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:51.089848  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.094055  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.097989  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:51.098090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:51.125460  346554 cri.go:89] found id: ""
	I1002 07:23:51.125487  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.125510  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:51.125536  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:51.125611  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:51.155658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.155684  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:51.155689  346554 cri.go:89] found id: ""
	I1002 07:23:51.155698  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:51.155757  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.159937  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.164562  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:51.164639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:51.194590  346554 cri.go:89] found id: ""
	I1002 07:23:51.194626  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.194635  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:51.194642  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:51.194720  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:51.230400  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:51.230424  346554 cri.go:89] found id: ""
	I1002 07:23:51.230433  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:51.230501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.235241  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:51.235335  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:51.264526  346554 cri.go:89] found id: ""
	I1002 07:23:51.264551  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.264562  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:51.264573  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:51.264603  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:51.292045  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:51.292128  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.377066  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:51.377104  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:51.408242  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:51.408273  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:51.437071  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:51.437100  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:51.508699  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:51.498128   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.498923   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.500573   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.501129   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.502653   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:51.498128   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.498923   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.500573   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.501129   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.502653   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:51.508723  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:51.508736  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:51.594052  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:51.594094  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:51.631968  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:51.632002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:51.710908  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:51.710950  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:51.751275  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:51.751309  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:51.859428  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:51.859510  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:54.376917  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:54.388247  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:54.388322  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:54.417539  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:54.417563  346554 cri.go:89] found id: ""
	I1002 07:23:54.417571  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:54.417634  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.421536  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:54.421612  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:54.452318  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:54.452342  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:54.452347  346554 cri.go:89] found id: ""
	I1002 07:23:54.452355  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:54.452410  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.457434  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.460992  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:54.461070  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:54.494010  346554 cri.go:89] found id: ""
	I1002 07:23:54.494031  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.494040  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:54.494045  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:54.494107  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:54.528280  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:54.528300  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:54.528305  346554 cri.go:89] found id: ""
	I1002 07:23:54.528312  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:54.528369  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.532283  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.535876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:54.535946  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:54.564214  346554 cri.go:89] found id: ""
	I1002 07:23:54.564240  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.564250  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:54.564256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:54.564347  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:54.594060  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:54.594084  346554 cri.go:89] found id: ""
	I1002 07:23:54.594093  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:54.594169  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.598344  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:54.598442  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:54.632402  346554 cri.go:89] found id: ""
	I1002 07:23:54.632426  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.632435  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:54.632445  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:54.632500  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:54.729477  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:54.729517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:54.800743  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:54.791704   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.792414   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794124   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794646   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.796482   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:54.791704   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.792414   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794124   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794646   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.796482   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:54.800815  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:54.800846  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:54.861032  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:54.861069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:54.889171  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:54.889244  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:54.925585  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:54.925615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:54.941174  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:54.941202  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:54.969205  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:54.969235  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:55.020047  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:55.020087  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:55.098725  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:55.098805  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:55.132210  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:55.132239  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:57.716428  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:57.730713  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:57.730787  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:57.757853  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:57.757878  346554 cri.go:89] found id: ""
	I1002 07:23:57.757887  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:57.757943  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.761971  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:57.762045  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:57.790866  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:57.790891  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:57.790897  346554 cri.go:89] found id: ""
	I1002 07:23:57.790904  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:57.790962  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.795621  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.799575  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:57.799653  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:57.830281  346554 cri.go:89] found id: ""
	I1002 07:23:57.830307  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.830317  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:57.830323  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:57.830382  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:57.858397  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:57.858420  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:57.858425  346554 cri.go:89] found id: ""
	I1002 07:23:57.858433  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:57.858488  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.862244  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.865851  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:57.865951  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:57.893160  346554 cri.go:89] found id: ""
	I1002 07:23:57.893234  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.893250  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:57.893258  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:57.893318  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:57.920413  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:57.920499  346554 cri.go:89] found id: ""
	I1002 07:23:57.920516  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:57.920585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.924327  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:57.924423  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:57.951174  346554 cri.go:89] found id: ""
	I1002 07:23:57.951197  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.951206  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:57.951216  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:57.951268  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:57.986550  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:57.986632  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:58.017224  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:58.017260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:58.122339  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:58.122377  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:58.138465  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:58.138494  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:58.168292  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:58.168317  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:58.230852  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:58.230890  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:58.328715  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:58.328764  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:58.357761  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:58.357792  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:58.444436  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:58.444482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:58.478280  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:58.478306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:58.560395  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:58.551535   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.552077   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554124   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554594   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.555744   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:58.551535   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.552077   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554124   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554594   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.555744   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:01.061663  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:01.077726  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:01.077804  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:01.106834  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:01.106860  346554 cri.go:89] found id: ""
	I1002 07:24:01.106869  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:01.106940  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.110940  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:01.111014  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:01.139370  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:01.139392  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:01.139397  346554 cri.go:89] found id: ""
	I1002 07:24:01.139404  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:01.139466  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.143857  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.148114  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:01.148207  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:01.178376  346554 cri.go:89] found id: ""
	I1002 07:24:01.178468  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.178493  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:01.178522  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:01.178635  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:01.208075  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:01.208098  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:01.208103  346554 cri.go:89] found id: ""
	I1002 07:24:01.208111  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:01.208178  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.212014  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.216098  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:01.216233  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:01.245384  346554 cri.go:89] found id: ""
	I1002 07:24:01.245424  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.245434  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:01.245440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:01.245503  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:01.282247  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:01.282322  346554 cri.go:89] found id: ""
	I1002 07:24:01.282346  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:01.282443  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.288826  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:01.288905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:01.319901  346554 cri.go:89] found id: ""
	I1002 07:24:01.319926  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.319934  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:01.319943  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:01.319956  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:01.389606  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:01.389692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:01.444021  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:01.444055  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:01.526762  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:01.526804  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:01.559019  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:01.559049  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:01.634782  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:01.634818  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:01.709026  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:01.699679   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.700913   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.701980   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.702845   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.704779   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:01.699679   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.700913   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.701980   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.702845   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.704779   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:01.709100  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:01.709120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:01.738970  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:01.739000  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:01.770329  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:01.770364  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:01.884154  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:01.884232  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:01.902364  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:01.902390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.435943  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:04.447669  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:04.447785  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:04.478942  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.478965  346554 cri.go:89] found id: ""
	I1002 07:24:04.478974  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:04.479030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.483417  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:04.483511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:04.518294  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:04.518320  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:04.518325  346554 cri.go:89] found id: ""
	I1002 07:24:04.518334  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:04.518388  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.522223  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.526427  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:04.526558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:04.558950  346554 cri.go:89] found id: ""
	I1002 07:24:04.558987  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.558996  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:04.559003  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:04.559153  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:04.586620  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:04.586645  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:04.586650  346554 cri.go:89] found id: ""
	I1002 07:24:04.586658  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:04.586737  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.590676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.594540  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:04.594644  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:04.621686  346554 cri.go:89] found id: ""
	I1002 07:24:04.621709  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.621719  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:04.621725  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:04.621781  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:04.649834  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:04.649855  346554 cri.go:89] found id: ""
	I1002 07:24:04.649863  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:04.649944  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.654335  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:04.654436  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:04.687143  346554 cri.go:89] found id: ""
	I1002 07:24:04.687166  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.687175  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:04.687184  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:04.687216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.715298  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:04.715329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:04.758402  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:04.758436  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:04.838751  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:04.838789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:04.870372  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:04.870403  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:04.984168  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:04.984207  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:04.999826  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:04.999858  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:05.088672  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:05.079342   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.080234   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082236   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082893   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.084684   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:05.079342   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.080234   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082236   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082893   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.084684   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:05.088696  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:05.088709  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:05.150024  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:05.150063  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:05.226780  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:05.226819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:05.255567  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:05.255605  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:07.791197  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:07.803594  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:07.803689  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:07.833077  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:07.833103  346554 cri.go:89] found id: ""
	I1002 07:24:07.833113  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:07.833214  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.837537  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:07.837661  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:07.866899  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:07.866926  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:07.866932  346554 cri.go:89] found id: ""
	I1002 07:24:07.866939  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:07.867000  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.870759  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.874593  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:07.874713  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:07.903524  346554 cri.go:89] found id: ""
	I1002 07:24:07.903587  346554 logs.go:282] 0 containers: []
	W1002 07:24:07.903620  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:07.903644  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:07.903738  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:07.934472  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:07.934547  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:07.934567  346554 cri.go:89] found id: ""
	I1002 07:24:07.934593  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:07.934688  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.938660  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.942349  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:07.942453  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:07.969924  346554 cri.go:89] found id: ""
	I1002 07:24:07.969947  346554 logs.go:282] 0 containers: []
	W1002 07:24:07.969956  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:07.969964  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:07.970022  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:07.998801  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:07.998826  346554 cri.go:89] found id: ""
	I1002 07:24:07.998834  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:07.998890  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:08.006051  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:08.006218  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:08.043683  346554 cri.go:89] found id: ""
	I1002 07:24:08.043712  346554 logs.go:282] 0 containers: []
	W1002 07:24:08.043723  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:08.043733  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:08.043746  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:08.094506  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:08.094546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:08.175873  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:08.175912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:08.208161  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:08.208191  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:08.234954  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:08.234983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:08.301287  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:08.301325  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:08.377087  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:08.377123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:08.405378  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:08.405407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:08.431355  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:08.431386  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:08.536433  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:08.536479  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:08.553542  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:08.553575  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:08.621305  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:08.613680   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.614222   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.615692   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.616097   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.617557   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:08.613680   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.614222   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.615692   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.616097   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.617557   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:11.122975  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:11.135150  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:11.135231  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:11.168608  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:11.168633  346554 cri.go:89] found id: ""
	I1002 07:24:11.168642  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:11.168704  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.172810  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:11.172893  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:11.204325  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:11.204401  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:11.204413  346554 cri.go:89] found id: ""
	I1002 07:24:11.204422  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:11.204491  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.208514  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.212208  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:11.212287  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:11.245698  346554 cri.go:89] found id: ""
	I1002 07:24:11.245725  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.245736  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:11.245743  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:11.245805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:11.274196  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:11.274219  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:11.274224  346554 cri.go:89] found id: ""
	I1002 07:24:11.274231  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:11.274292  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.278411  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.282735  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:11.282813  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:11.322108  346554 cri.go:89] found id: ""
	I1002 07:24:11.322129  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.322138  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:11.322144  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:11.322203  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:11.350582  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:11.350647  346554 cri.go:89] found id: ""
	I1002 07:24:11.350659  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:11.350715  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.354559  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:11.354628  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:11.386834  346554 cri.go:89] found id: ""
	I1002 07:24:11.386899  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.386923  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:11.386951  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:11.386981  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:11.465595  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:11.465632  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:11.541894  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:11.541933  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:11.619365  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:11.619408  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:11.647305  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:11.647336  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:11.686923  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:11.686952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:11.792344  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:11.792440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:11.814593  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:11.814623  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:11.895211  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:11.886121   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.886872   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.888767   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.889333   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.890295   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:11.886121   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.886872   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.888767   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.889333   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.890295   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:11.895236  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:11.895250  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:11.921556  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:11.921586  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:11.957833  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:11.957872  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:14.490490  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:14.502377  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:14.502482  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:14.534162  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:14.534185  346554 cri.go:89] found id: ""
	I1002 07:24:14.534205  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:14.534262  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.538631  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:14.538701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:14.568427  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:14.568450  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:14.568456  346554 cri.go:89] found id: ""
	I1002 07:24:14.568463  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:14.568527  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.572917  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.576683  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:14.576760  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:14.604778  346554 cri.go:89] found id: ""
	I1002 07:24:14.604809  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.604819  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:14.604825  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:14.604932  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:14.631788  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:14.631812  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:14.631817  346554 cri.go:89] found id: ""
	I1002 07:24:14.631824  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:14.631887  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.635951  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.639653  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:14.639769  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:14.682797  346554 cri.go:89] found id: ""
	I1002 07:24:14.682823  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.682832  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:14.682839  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:14.682899  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:14.722146  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:14.722175  346554 cri.go:89] found id: ""
	I1002 07:24:14.722183  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:14.722239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.727035  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:14.727164  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:14.759413  346554 cri.go:89] found id: ""
	I1002 07:24:14.759438  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.759447  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:14.759458  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:14.759470  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:14.786929  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:14.787000  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:14.853005  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:14.853042  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:14.899040  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:14.899071  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:15.004708  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:15.004742  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:15.123051  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:15.123106  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:15.154325  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:15.154357  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:15.183161  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:15.183248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:15.265975  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:15.266013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:15.299575  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:15.299607  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:15.315427  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:15.315454  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:15.394115  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:15.385425   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.386315   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388134   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388810   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.390355   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:15.385425   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.386315   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388134   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388810   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.390355   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:17.895569  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:17.909876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:17.909985  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:17.941059  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:17.941083  346554 cri.go:89] found id: ""
	I1002 07:24:17.941092  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:17.941159  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.945318  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:17.945401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:17.973722  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:17.973743  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:17.973747  346554 cri.go:89] found id: ""
	I1002 07:24:17.973755  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:17.973813  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.978340  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.983135  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:17.983214  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:18.024398  346554 cri.go:89] found id: ""
	I1002 07:24:18.024424  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.024433  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:18.024440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:18.024518  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:18.053513  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:18.053535  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:18.053540  346554 cri.go:89] found id: ""
	I1002 07:24:18.053548  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:18.053631  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.057706  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.061744  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:18.061820  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:18.093847  346554 cri.go:89] found id: ""
	I1002 07:24:18.093873  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.093884  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:18.093891  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:18.093956  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:18.123256  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:18.123283  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:18.123289  346554 cri.go:89] found id: ""
	I1002 07:24:18.123296  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:18.123355  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.127263  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.131206  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:18.131284  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:18.157688  346554 cri.go:89] found id: ""
	I1002 07:24:18.157714  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.157724  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:18.157733  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:18.157745  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:18.203920  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:18.203946  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:18.220036  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:18.220064  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:18.288859  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:18.281281   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.282404   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283332   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283985   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.285062   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:18.281281   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.282404   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283332   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283985   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.285062   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:18.288885  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:18.288898  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:18.326029  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:18.326064  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:18.410880  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:18.410919  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:18.516955  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:18.516994  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:18.548753  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:18.548786  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:18.613812  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:18.613849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:18.643416  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:18.643444  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:18.670170  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:18.670199  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:18.699194  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:18.699231  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:21.274356  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:21.285713  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:21.285785  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:21.312389  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:21.312413  346554 cri.go:89] found id: ""
	I1002 07:24:21.312427  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:21.312492  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.316212  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:21.316290  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:21.341368  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:21.341390  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:21.341396  346554 cri.go:89] found id: ""
	I1002 07:24:21.341403  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:21.341458  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.345157  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.348764  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:21.348841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:21.381263  346554 cri.go:89] found id: ""
	I1002 07:24:21.381292  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.381302  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:21.381308  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:21.381366  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:21.412001  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:21.412022  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:21.412027  346554 cri.go:89] found id: ""
	I1002 07:24:21.412035  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:21.412092  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.415991  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.419745  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:21.419818  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:21.448790  346554 cri.go:89] found id: ""
	I1002 07:24:21.448817  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.448826  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:21.448832  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:21.448894  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:21.476863  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:21.476885  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:21.476890  346554 cri.go:89] found id: ""
	I1002 07:24:21.476897  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:21.476995  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.481180  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.484939  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:21.485015  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:21.518979  346554 cri.go:89] found id: ""
	I1002 07:24:21.519005  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.519014  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:21.519023  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:21.519035  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:21.548837  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:21.548868  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:21.577649  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:21.577678  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:21.614505  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:21.614538  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:21.648602  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:21.648630  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:21.730478  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:21.730515  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:21.770385  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:21.770420  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:21.869953  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:21.869990  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:21.890825  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:21.890864  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:21.963492  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:21.954886   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.955596   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957198   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957744   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.959330   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:21.954886   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.955596   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957198   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957744   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.959330   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:21.963514  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:21.963531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:21.990531  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:21.990559  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:22.069923  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:22.070005  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:24.652448  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:24.663850  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:24.663928  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:24.691270  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:24.691349  346554 cri.go:89] found id: ""
	I1002 07:24:24.691385  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:24.691483  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.695776  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:24.695846  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:24.722540  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:24.722563  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:24.722568  346554 cri.go:89] found id: ""
	I1002 07:24:24.722575  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:24.722641  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.726529  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.730111  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:24.730184  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:24.760973  346554 cri.go:89] found id: ""
	I1002 07:24:24.760999  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.761009  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:24.761015  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:24.761096  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:24.788682  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:24.788702  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:24.788707  346554 cri.go:89] found id: ""
	I1002 07:24:24.788714  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:24.788771  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.795284  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.800831  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:24.800927  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:24.826399  346554 cri.go:89] found id: ""
	I1002 07:24:24.826434  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.826443  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:24.826464  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:24.826550  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:24.854301  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:24.854328  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:24.854334  346554 cri.go:89] found id: ""
	I1002 07:24:24.854341  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:24.854423  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.858547  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.862285  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:24.862407  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:24.892024  346554 cri.go:89] found id: ""
	I1002 07:24:24.892048  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.892057  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:24.892067  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:24.892079  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:24.993633  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:24.993672  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:25.023967  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:25.023999  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:25.088069  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:25.088104  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:25.171716  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:25.171754  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:25.211296  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:25.211330  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:25.277865  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:25.269711   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.270447   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272032   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272563   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.274098   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:25.269711   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.270447   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272032   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272563   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.274098   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:25.277888  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:25.277901  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:25.305336  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:25.305363  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:25.339149  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:25.339311  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:25.419370  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:25.419407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:25.452415  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:25.452447  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:25.482792  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:25.482824  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:28.019833  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:28.031976  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:28.032047  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:28.061518  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:28.061538  346554 cri.go:89] found id: ""
	I1002 07:24:28.061547  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:28.061610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.065737  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:28.065812  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:28.100250  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:28.100274  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:28.100280  346554 cri.go:89] found id: ""
	I1002 07:24:28.100287  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:28.100347  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.104729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.109130  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:28.109242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:28.136194  346554 cri.go:89] found id: ""
	I1002 07:24:28.136220  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.136229  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:28.136235  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:28.136294  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:28.177728  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:28.177751  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:28.177756  346554 cri.go:89] found id: ""
	I1002 07:24:28.177764  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:28.177822  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.182057  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.185909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:28.185984  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:28.213081  346554 cri.go:89] found id: ""
	I1002 07:24:28.213104  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.213114  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:28.213120  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:28.213180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:28.242037  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:28.242061  346554 cri.go:89] found id: ""
	I1002 07:24:28.242070  346554 logs.go:282] 1 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd]
	I1002 07:24:28.242125  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.245909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:28.245982  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:28.272643  346554 cri.go:89] found id: ""
	I1002 07:24:28.272688  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.272698  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:28.272708  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:28.272741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:28.368590  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:28.368674  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:28.441922  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:28.433374   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.434538   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.435818   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.436626   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.438305   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:28.433374   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.434538   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.435818   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.436626   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.438305   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:28.441993  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:28.442025  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:28.485137  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:28.485174  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:28.519916  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:28.519949  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:28.547334  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:28.547364  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:28.578668  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:28.578698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:28.597024  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:28.597053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:28.625533  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:28.625562  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:28.703945  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:28.703983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:28.782221  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:28.782256  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:31.363217  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:31.375576  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:31.375651  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:31.412392  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:31.412416  346554 cri.go:89] found id: ""
	I1002 07:24:31.412425  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:31.412489  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.416397  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:31.416497  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:31.447142  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:31.447172  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:31.447178  346554 cri.go:89] found id: ""
	I1002 07:24:31.447186  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:31.447245  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.451130  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.454872  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:31.454972  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:31.491372  346554 cri.go:89] found id: ""
	I1002 07:24:31.491393  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.491401  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:31.491407  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:31.491464  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:31.523581  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:31.523606  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:31.523611  346554 cri.go:89] found id: ""
	I1002 07:24:31.523618  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:31.523696  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.527714  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.531521  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:31.531638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:31.557016  346554 cri.go:89] found id: ""
	I1002 07:24:31.557090  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.557110  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:31.557117  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:31.557180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:31.587792  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:31.587815  346554 cri.go:89] found id: ""
	I1002 07:24:31.587824  346554 logs.go:282] 1 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd]
	I1002 07:24:31.587900  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.591474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:31.591544  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:31.621938  346554 cri.go:89] found id: ""
	I1002 07:24:31.622002  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.622025  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:31.622057  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:31.622087  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:31.699830  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:31.699940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:31.731270  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:31.731297  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:31.830036  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:31.830073  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:31.849448  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:31.849489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:31.887973  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:31.888002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:31.925845  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:31.925879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:31.955314  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:31.955344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:32.027448  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:32.017106   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.018245   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.019008   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.021153   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.022262   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:32.017106   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.018245   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.019008   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.021153   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.022262   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:32.027527  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:32.027556  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:32.097086  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:32.097123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:32.181841  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:32.181877  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:34.710633  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:34.725897  346554 out.go:203] 
	W1002 07:24:34.728826  346554 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1002 07:24:34.728867  346554 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1002 07:24:34.728877  346554 out.go:285] * Related issues:
	* Related issues:
	W1002 07:24:34.728892  346554 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1002 07:24:34.728908  346554 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1002 07:24:34.732168  346554 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-arm64 -p ha-550225 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 105
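The K8S_APISERVER_MISSING exit above means the kube-apiserver process never reappeared within the 6m0s wait after the restart, which is why every "describe nodes" attempt in the preceding log was refused on localhost:8443. A minimal manual follow-up, assuming the ha-550225 profile is still up under the docker driver (these commands are illustrative and were not part of the captured test run):

	# list any apiserver containers the runtime knows about, running or exited
	out/minikube-linux-arm64 -p ha-550225 ssh -- sudo crictl ps -a --name=kube-apiserver
	# check SELinux, per the suggestion above (getenforce may be absent on the kicbase image)
	out/minikube-linux-arm64 -p ha-550225 ssh -- getenforce
	# look at recent kubelet activity for static-pod start failures
	out/minikube-linux-arm64 -p ha-550225 ssh -- sudo journalctl -u kubelet -n 100 --no-pager
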
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-550225
helpers_test.go:243: (dbg) docker inspect ha-550225:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	        "Created": "2025-10-02T07:02:30.539981852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 346684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:16:43.830280649Z",
	            "FinishedAt": "2025-10-02T07:16:42.559270036Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hosts",
	        "LogPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c-json.log",
	        "Name": "/ha-550225",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-550225:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-550225",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	                "LowerDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-550225",
	                "Source": "/var/lib/docker/volumes/ha-550225/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-550225",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-550225",
	                "name.minikube.sigs.k8s.io": "ha-550225",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afa0a4e6ee5917c0a800a9abfad94a173555b01d2438c9506474ee7c27ad6564",
	            "SandboxKey": "/var/run/docker/netns/afa0a4e6ee59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-550225": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:f4:60:b8:9c:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "87a294cab4b5d50d5f227902c62678f378fbede9275f1d54f0b3de7a1f36e1a0",
	                    "EndpointID": "e0227cbf31cf607a461ab665f3bdb5d5d554f27df511a468e38aecbd366c38c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-550225",
	                        "1c1f8ec53310"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
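The full inspect dump above is long; when triaging, a format filter over the same container can pull out just the fields relevant to this failure, namely the container state and the host port that traffic to the apiserver's 8443/tcp is published on. These commands are illustrative, not part of the captured run:

	# container state (should be "running" for a live post-mortem)
	docker inspect -f '{{.State.Status}}' ha-550225
	# host port published for 8443/tcp (33191 in the output above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-550225
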
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-550225 -n ha-550225
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-550225 logs -n 25: (2.297549963s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-550225 cp ha-550225-m03:/home/docker/cp-test.txt ha-550225-m04:/home/docker/cp-test_ha-550225-m03_ha-550225-m04.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test_ha-550225-m03_ha-550225-m04.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp testdata/cp-test.txt ha-550225-m04:/home/docker/cp-test.txt                                                             │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216719830/001/cp-test_ha-550225-m04.txt │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225:/home/docker/cp-test_ha-550225-m04_ha-550225.txt                       │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225.txt                                                 │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m02:/home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m02 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m03:/home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ node    │ ha-550225 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ node    │ ha-550225 node start m02 --alsologtostderr -v 5                                                                                      │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:08 UTC │
	│ node    │ ha-550225 node list --alsologtostderr -v 5                                                                                           │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │                     │
	│ stop    │ ha-550225 stop --alsologtostderr -v 5                                                                                                │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │ 02 Oct 25 07:08 UTC │
	│ start   │ ha-550225 start --wait true --alsologtostderr -v 5                                                                                   │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │                     │
	│ node    │ ha-550225 node list --alsologtostderr -v 5                                                                                           │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	│ node    │ ha-550225 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	│ stop    │ ha-550225 stop --alsologtostderr -v 5                                                                                                │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │ 02 Oct 25 07:16 UTC │
	│ start   │ ha-550225 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:16:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:16:43.556654  346554 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:16:43.556900  346554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:43.556935  346554 out.go:374] Setting ErrFile to fd 2...
	I1002 07:16:43.556957  346554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:43.557253  346554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:16:43.557663  346554 out.go:368] Setting JSON to false
	I1002 07:16:43.558546  346554 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7155,"bootTime":1759382249,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:16:43.558645  346554 start.go:140] virtualization:  
	I1002 07:16:43.562097  346554 out.go:179] * [ha-550225] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:16:43.565995  346554 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:16:43.566065  346554 notify.go:220] Checking for updates...
	I1002 07:16:43.572511  346554 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:16:43.575317  346554 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:43.578176  346554 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:16:43.580964  346554 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:16:43.583787  346554 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:16:43.587186  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:43.587749  346554 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:16:43.619258  346554 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:16:43.619425  346554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:16:43.676323  346554 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:16:43.665454213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:16:43.676450  346554 docker.go:318] overlay module found
	I1002 07:16:43.679463  346554 out.go:179] * Using the docker driver based on existing profile
	I1002 07:16:43.682328  346554 start.go:304] selected driver: docker
	I1002 07:16:43.682357  346554 start.go:924] validating driver "docker" against &{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:16:43.682550  346554 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:16:43.682661  346554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:16:43.739766  346554 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:16:43.730208669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:16:43.740206  346554 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:16:43.740241  346554 cni.go:84] Creating CNI manager for ""
	I1002 07:16:43.740306  346554 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:16:43.740357  346554 start.go:348] cluster config:
	{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:16:43.743601  346554 out.go:179] * Starting "ha-550225" primary control-plane node in "ha-550225" cluster
	I1002 07:16:43.746399  346554 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:16:43.749341  346554 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:16:43.752288  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:43.752352  346554 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:16:43.752374  346554 cache.go:58] Caching tarball of preloaded images
	I1002 07:16:43.752377  346554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:16:43.752484  346554 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:16:43.752495  346554 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:16:43.752642  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:43.772750  346554 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:16:43.772775  346554 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:16:43.772803  346554 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:16:43.772827  346554 start.go:360] acquireMachinesLock for ha-550225: {Name:mkc1f009b4f35f6b87d580d72d0a621c44a033f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:16:43.772899  346554 start.go:364] duration metric: took 46.236µs to acquireMachinesLock for "ha-550225"
	I1002 07:16:43.772922  346554 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:16:43.772934  346554 fix.go:54] fixHost starting: 
	I1002 07:16:43.773187  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:16:43.794446  346554 fix.go:112] recreateIfNeeded on ha-550225: state=Stopped err=<nil>
	W1002 07:16:43.794478  346554 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:16:43.797824  346554 out.go:252] * Restarting existing docker container for "ha-550225" ...
	I1002 07:16:43.797912  346554 cli_runner.go:164] Run: docker start ha-550225
	I1002 07:16:44.052064  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:16:44.071577  346554 kic.go:430] container "ha-550225" state is running.
	I1002 07:16:44.071977  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:44.097000  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:44.097247  346554 machine.go:93] provisionDockerMachine start ...
	I1002 07:16:44.097316  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:44.119603  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:44.120087  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:44.120103  346554 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:16:44.120661  346554 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57572->127.0.0.1:33188: read: connection reset by peer
	I1002 07:16:47.250760  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:16:47.250786  346554 ubuntu.go:182] provisioning hostname "ha-550225"
	I1002 07:16:47.250888  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:47.268212  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:47.268525  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:47.268543  346554 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225 && echo "ha-550225" | sudo tee /etc/hostname
	I1002 07:16:47.408749  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:16:47.408837  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:47.428229  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:47.428559  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:47.428582  346554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:16:47.563394  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:16:47.563422  346554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:16:47.563445  346554 ubuntu.go:190] setting up certificates
	I1002 07:16:47.563480  346554 provision.go:84] configureAuth start
	I1002 07:16:47.563555  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:47.583742  346554 provision.go:143] copyHostCerts
	I1002 07:16:47.583804  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:47.583843  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:16:47.583865  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:47.583942  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:16:47.584044  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:47.584067  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:16:47.584076  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:47.584105  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:16:47.584165  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:47.584188  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:16:47.584197  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:47.584232  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:16:47.584294  346554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225 san=[127.0.0.1 192.168.49.2 ha-550225 localhost minikube]
	I1002 07:16:49.085710  346554 provision.go:177] copyRemoteCerts
	I1002 07:16:49.085804  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:16:49.085919  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.102600  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.203033  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:16:49.203111  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:16:49.220709  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:16:49.220773  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 07:16:49.238283  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:16:49.238380  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:16:49.255763  346554 provision.go:87] duration metric: took 1.692265184s to configureAuth
	I1002 07:16:49.255832  346554 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:16:49.256105  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:49.256221  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.273296  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:49.273613  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:49.273636  346554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:16:49.545258  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:16:49.545281  346554 machine.go:96] duration metric: took 5.448016594s to provisionDockerMachine
	I1002 07:16:49.545292  346554 start.go:293] postStartSetup for "ha-550225" (driver="docker")
	I1002 07:16:49.545335  346554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:16:49.545400  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:16:49.545448  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.562765  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.663440  346554 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:16:49.667012  346554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:16:49.667043  346554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:16:49.667055  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:16:49.667131  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:16:49.667227  346554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:16:49.667243  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:16:49.667356  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:16:49.675157  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:49.693566  346554 start.go:296] duration metric: took 148.259083ms for postStartSetup
	I1002 07:16:49.693674  346554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:16:49.693733  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.711628  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.808263  346554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:16:49.813222  346554 fix.go:56] duration metric: took 6.040285845s for fixHost
	I1002 07:16:49.813250  346554 start.go:83] releasing machines lock for "ha-550225", held for 6.040338171s
	I1002 07:16:49.813321  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:49.832086  346554 ssh_runner.go:195] Run: cat /version.json
	I1002 07:16:49.832138  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.832170  346554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:16:49.832223  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.860178  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.874339  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.958866  346554 ssh_runner.go:195] Run: systemctl --version
	I1002 07:16:50.049981  346554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:16:50.088401  346554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:16:50.093782  346554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:16:50.093888  346554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:16:50.102679  346554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:16:50.102707  346554 start.go:495] detecting cgroup driver to use...
	I1002 07:16:50.102739  346554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:16:50.102790  346554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:16:50.119025  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:16:50.132406  346554 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:16:50.132508  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:16:50.147702  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:16:50.161840  346554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:16:50.285662  346554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:16:50.412243  346554 docker.go:234] disabling docker service ...
	I1002 07:16:50.412358  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:16:50.429880  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:16:50.443435  346554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:16:50.570143  346554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:16:50.705200  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:16:50.718349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:16:50.732391  346554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:16:50.732489  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.741688  346554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:16:50.741842  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.751301  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.760089  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.769286  346554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:16:50.777484  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.786723  346554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.795606  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.804393  346554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:16:50.812287  346554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:16:50.819774  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:16:50.940841  346554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:16:51.084825  346554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:16:51.084933  346554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:16:51.088952  346554 start.go:563] Will wait 60s for crictl version
	I1002 07:16:51.089022  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:16:51.093255  346554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:16:51.121871  346554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:16:51.122035  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:16:51.151306  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:16:51.186151  346554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:16:51.188993  346554 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:16:51.205719  346554 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:16:51.209600  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:16:51.219722  346554 kubeadm.go:883] updating cluster {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:16:51.219870  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:51.219932  346554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:16:51.259348  346554 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:16:51.259373  346554 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:16:51.259435  346554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:16:51.285823  346554 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:16:51.285850  346554 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:16:51.285860  346554 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:16:51.285975  346554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:16:51.286067  346554 ssh_runner.go:195] Run: crio config
	I1002 07:16:51.349840  346554 cni.go:84] Creating CNI manager for ""
	I1002 07:16:51.349864  346554 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:16:51.349907  346554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:16:51.349941  346554 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-550225 NodeName:ha-550225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:16:51.350123  346554 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-550225"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:16:51.350149  346554 kube-vip.go:115] generating kube-vip config ...
	I1002 07:16:51.350220  346554 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:16:51.362455  346554 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:16:51.362590  346554 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1002 07:16:51.362683  346554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:16:51.370716  346554 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:16:51.370824  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 07:16:51.378562  346554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:16:51.392384  346554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:16:51.405890  346554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1002 07:16:51.418852  346554 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:16:51.431748  346554 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:16:51.435456  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:16:51.445200  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:16:51.564279  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:16:51.580309  346554 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.2
	I1002 07:16:51.580335  346554 certs.go:195] generating shared ca certs ...
	I1002 07:16:51.580352  346554 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:51.580577  346554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:16:51.580643  346554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:16:51.580658  346554 certs.go:257] generating profile certs ...
	I1002 07:16:51.580760  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:16:51.580851  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa
	I1002 07:16:51.580915  346554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:16:51.580931  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:16:51.580960  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:16:51.580981  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:16:51.581001  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:16:51.581029  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:16:51.581060  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:16:51.581082  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:16:51.581099  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:16:51.581172  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:16:51.581223  346554 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:16:51.581238  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:16:51.581269  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:16:51.581323  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:16:51.581355  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:16:51.581425  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:51.581476  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.581497  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.581511  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.582046  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:16:51.608528  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:16:51.630032  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:16:51.651693  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:16:51.672816  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:16:51.694334  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:16:51.713045  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:16:51.734929  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:16:51.759074  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:16:51.783798  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:16:51.810129  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:16:51.829572  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:16:51.844038  346554 ssh_runner.go:195] Run: openssl version
	I1002 07:16:51.850521  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:16:51.859107  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.863052  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.863200  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.905139  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:16:51.915686  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:16:51.924646  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.928631  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.928697  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.970474  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:16:51.979037  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:16:51.988282  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.992329  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.992400  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:16:52.034608  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
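
The block above wires each CA into the system trust store: openssl x509 -hash computes the subject-hash name (e.g. b5213941) and the certificate is then symlinked as <hash>.0 under /etc/ssl/certs so OpenSSL-based clients can find it. A minimal Go sketch of the same idea, assuming openssl is on PATH and reusing paths from this run purely for illustration:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkByHash symlinks certPath into dir under its OpenSSL subject-hash name,
    // mirroring the `openssl x509 -hash -noout` + `ln -fs` steps in the log above.
    func linkByHash(certPath, dir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("%s/%s.0", dir, hash)
        os.Remove(link) // best-effort, emulates ln -fs overwriting an existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
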
	I1002 07:16:52.043437  346554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:16:52.047807  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:16:52.090171  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:16:52.132189  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:16:52.173672  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:16:52.215246  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:16:52.259493  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
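
Each of the -checkend 86400 probes above asks whether a certificate will still be valid 24 hours from now; a non-zero exit would force regeneration before the control plane is restarted. The equivalent check in Go, as a rough sketch (the certificate path is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of `openssl x509 -checkend 86400`: fail if the certificate
        // expires within the next 24 hours.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h, would regenerate")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least 24h")
    }
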
	I1002 07:16:52.303359  346554 kubeadm.go:400] StartCluster: {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:16:52.303541  346554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:16:52.303637  346554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:16:52.411948  346554 cri.go:89] found id: ""
	I1002 07:16:52.412087  346554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:16:52.423926  346554 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:16:52.423985  346554 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:16:52.424072  346554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:16:52.435971  346554 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:16:52.436519  346554 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-550225" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:52.436691  346554 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-292504/kubeconfig needs updating (will repair): [kubeconfig missing "ha-550225" cluster setting kubeconfig missing "ha-550225" context setting]
	I1002 07:16:52.436999  346554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:52.437624  346554 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
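
The kapi.go dump above is the client-go rest.Config that minikube assembles from the profile's client certificate, key and CA. A stripped-down sketch of building an equivalent client; the host and file paths are copied from the log, everything else is illustrative:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://192.168.49.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key",
                CAFile:   "/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }
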
	I1002 07:16:52.438178  346554 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:16:52.438372  346554 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:16:52.438396  346554 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:16:52.438439  346554 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:16:52.438479  346554 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:16:52.438242  346554 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:16:52.438946  346554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:16:52.453843  346554 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:16:52.453908  346554 kubeadm.go:601] duration metric: took 29.902711ms to restartPrimaryControlPlane
	I1002 07:16:52.454041  346554 kubeadm.go:402] duration metric: took 150.691034ms to StartCluster
	I1002 07:16:52.454081  346554 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:52.454172  346554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:52.454859  346554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:52.455192  346554 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:16:52.455245  346554 start.go:241] waiting for startup goroutines ...
	I1002 07:16:52.455279  346554 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:16:52.455778  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:52.480332  346554 out.go:179] * Enabled addons: 
	I1002 07:16:52.484238  346554 addons.go:514] duration metric: took 28.941955ms for enable addons: enabled=[]
	I1002 07:16:52.484336  346554 start.go:246] waiting for cluster config update ...
	I1002 07:16:52.484369  346554 start.go:255] writing updated cluster config ...
	I1002 07:16:52.488274  346554 out.go:203] 
	I1002 07:16:52.492458  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:52.492645  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:52.496127  346554 out.go:179] * Starting "ha-550225-m02" control-plane node in "ha-550225" cluster
	I1002 07:16:52.499195  346554 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:16:52.502435  346554 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:16:52.505497  346554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:16:52.505566  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:52.505677  346554 cache.go:58] Caching tarball of preloaded images
	I1002 07:16:52.505807  346554 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:16:52.505838  346554 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:16:52.506003  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:52.530361  346554 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:16:52.530380  346554 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:16:52.530392  346554 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:16:52.530415  346554 start.go:360] acquireMachinesLock for ha-550225-m02: {Name:mk11ef625bc214163cbeacdb736ddec4214a8374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:16:52.530475  346554 start.go:364] duration metric: took 37.3µs to acquireMachinesLock for "ha-550225-m02"
	I1002 07:16:52.530499  346554 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:16:52.530506  346554 fix.go:54] fixHost starting: m02
	I1002 07:16:52.530790  346554 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:16:52.559198  346554 fix.go:112] recreateIfNeeded on ha-550225-m02: state=Stopped err=<nil>
	W1002 07:16:52.559226  346554 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:16:52.563143  346554 out.go:252] * Restarting existing docker container for "ha-550225-m02" ...
	I1002 07:16:52.563247  346554 cli_runner.go:164] Run: docker start ha-550225-m02
	I1002 07:16:52.985736  346554 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:16:53.019972  346554 kic.go:430] container "ha-550225-m02" state is running.
	I1002 07:16:53.020350  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:53.045172  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:53.045437  346554 machine.go:93] provisionDockerMachine start ...
	I1002 07:16:53.045501  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:53.087166  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:53.087519  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:53.087528  346554 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:16:53.088138  346554 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45188->127.0.0.1:33193: read: connection reset by peer
	I1002 07:16:56.311713  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:16:56.311782  346554 ubuntu.go:182] provisioning hostname "ha-550225-m02"
	I1002 07:16:56.311878  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:56.344609  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:56.344917  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:56.344929  346554 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225-m02 && echo "ha-550225-m02" | sudo tee /etc/hostname
	I1002 07:16:56.639669  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:16:56.639788  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:56.668649  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:56.668967  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:56.668991  346554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:16:56.892812  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:16:56.892848  346554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:16:56.892865  346554 ubuntu.go:190] setting up certificates
	I1002 07:16:56.892886  346554 provision.go:84] configureAuth start
	I1002 07:16:56.892966  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:56.931268  346554 provision.go:143] copyHostCerts
	I1002 07:16:56.931313  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:56.931346  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:16:56.931357  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:56.931436  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:16:56.931520  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:56.931541  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:16:56.931548  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:56.931576  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:16:56.931619  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:56.931640  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:16:56.931645  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:56.931673  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:16:56.931727  346554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225-m02 san=[127.0.0.1 192.168.49.3 ha-550225-m02 localhost minikube]
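
The server certificate generated here is signed by the minikube CA and carries the SAN list shown in the log line (127.0.0.1, 192.168.49.3, ha-550225-m02, localhost, minikube). A hedged sketch of producing such a certificate with crypto/x509; it assumes a PKCS#1 RSA CA key and uses illustrative file names, so it is not minikube's actual provisioning code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caCertPEM, _ := os.ReadFile("ca.pem")     // illustrative path
        caKeyPEM, _ := os.ReadFile("ca-key.pem")  // illustrative path
        caBlock, _ := pem.Decode(caCertPEM)
        keyBlock, _ := pem.Decode(caKeyPEM)
        if caBlock == nil || keyBlock == nil {
            panic("could not decode CA PEM files")
        }
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            panic(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 RSA key
        if err != nil {
            panic(err)
        }

        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-550225-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as listed in the provisioning log line above.
            DNSNames:    []string{"ha-550225-m02", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
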
	I1002 07:16:57.380087  346554 provision.go:177] copyRemoteCerts
	I1002 07:16:57.380161  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:16:57.380209  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:57.399377  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:57.503607  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:16:57.503674  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:16:57.534864  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:16:57.534935  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 07:16:57.579624  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:16:57.579686  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:16:57.613798  346554 provision.go:87] duration metric: took 720.891298ms to configureAuth
	I1002 07:16:57.613866  346554 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:16:57.614125  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:57.614268  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:57.655334  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:57.655649  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:57.655669  346554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:16:58.296218  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:16:58.296241  346554 machine.go:96] duration metric: took 5.250794733s to provisionDockerMachine
	I1002 07:16:58.296266  346554 start.go:293] postStartSetup for "ha-550225-m02" (driver="docker")
	I1002 07:16:58.296279  346554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:16:58.296361  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:16:58.296407  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.334246  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.454625  346554 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:16:58.462912  346554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:16:58.462946  346554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:16:58.462957  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:16:58.463024  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:16:58.463132  346554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:16:58.463146  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:16:58.463245  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:16:58.476350  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:58.502934  346554 start.go:296] duration metric: took 206.651168ms for postStartSetup
	I1002 07:16:58.503074  346554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:16:58.503140  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.541010  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.704044  346554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:16:58.724725  346554 fix.go:56] duration metric: took 6.194210695s for fixHost
	I1002 07:16:58.724751  346554 start.go:83] releasing machines lock for "ha-550225-m02", held for 6.194264053s
	I1002 07:16:58.724830  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:58.757236  346554 out.go:179] * Found network options:
	I1002 07:16:58.760259  346554 out.go:179]   - NO_PROXY=192.168.49.2
	W1002 07:16:58.763701  346554 proxy.go:120] fail to check proxy env: Error ip not in block
	W1002 07:16:58.763752  346554 proxy.go:120] fail to check proxy env: Error ip not in block
	I1002 07:16:58.763820  346554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:16:58.763852  346554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:16:58.763870  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.763907  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.799805  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.800051  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:59.297366  346554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:16:59.320265  346554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:16:59.320354  346554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:16:59.335012  346554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:16:59.335039  346554 start.go:495] detecting cgroup driver to use...
	I1002 07:16:59.335070  346554 detect.go:187] detected "cgroupfs" cgroup driver on host os
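
With the docker driver, the cgroup driver that matters is the one the host's container engine uses; this run detects cgroupfs and CRI-O is configured to match a few lines further down. The exact detection logic is not visible in this log; one simple, illustrative way to query it is to ask the Docker daemon directly:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Illustrative only (not minikube's detect.go): `docker info` reports the
        // cgroup driver the daemon itself is using.
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            fmt.Println("could not query docker, assuming cgroupfs")
            return
        }
        fmt.Println("docker cgroup driver:", strings.TrimSpace(string(out))) // e.g. "cgroupfs"
    }
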
	I1002 07:16:59.335161  346554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:16:59.357972  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:16:59.378445  346554 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:16:59.378521  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:16:59.402692  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:16:59.423049  346554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:16:59.777657  346554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:17:00.088553  346554 docker.go:234] disabling docker service ...
	I1002 07:17:00.088656  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:17:00.130593  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:17:00.210008  346554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:17:00.633988  346554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:17:01.021589  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:17:01.054167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:17:01.092894  346554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:17:01.092980  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.111830  346554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:17:01.111928  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.139965  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.151897  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.168595  346554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:17:01.186410  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.204646  346554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.221763  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.236700  346554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:17:01.257944  346554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:17:01.272835  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:17:01.618372  346554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:18:32.051852  346554 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.433435555s)
	I1002 07:18:32.051878  346554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:18:32.051938  346554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:18:32.056156  346554 start.go:563] Will wait 60s for crictl version
	I1002 07:18:32.056222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:18:32.060117  346554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:18:32.088770  346554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:18:32.088860  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:18:32.119432  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:18:32.154051  346554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:18:32.156909  346554 out.go:179]   - env NO_PROXY=192.168.49.2
	I1002 07:18:32.159957  346554 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:18:32.177164  346554 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:18:32.181230  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:18:32.191471  346554 mustload.go:65] Loading cluster: ha-550225
	I1002 07:18:32.191729  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:18:32.191999  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:18:32.209130  346554 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:18:32.209416  346554 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.3
	I1002 07:18:32.209433  346554 certs.go:195] generating shared ca certs ...
	I1002 07:18:32.209448  346554 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:18:32.209574  346554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:18:32.209622  346554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:18:32.209635  346554 certs.go:257] generating profile certs ...
	I1002 07:18:32.209712  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:18:32.209761  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.e172f685
	I1002 07:18:32.209802  346554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:18:32.209816  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:18:32.209829  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:18:32.209843  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:18:32.209855  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:18:32.209869  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:18:32.209883  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:18:32.209898  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:18:32.209908  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:18:32.209964  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:18:32.209998  346554 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:18:32.210010  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:18:32.210033  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:18:32.210061  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:18:32.210089  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:18:32.210137  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:18:32.210168  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.210187  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.210198  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.210261  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:18:32.227689  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:18:32.315413  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1002 07:18:32.319445  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1002 07:18:32.328111  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1002 07:18:32.331777  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1002 07:18:32.340081  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1002 07:18:32.343746  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1002 07:18:32.351558  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1002 07:18:32.354911  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1002 07:18:32.362878  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1002 07:18:32.366632  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1002 07:18:32.374581  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1002 07:18:32.378281  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1002 07:18:32.386552  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:18:32.405394  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:18:32.422759  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:18:32.440360  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:18:32.457759  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:18:32.475843  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:18:32.493288  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:18:32.510289  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:18:32.527991  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:18:32.545549  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:18:32.562952  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:18:32.580383  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1002 07:18:32.593477  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1002 07:18:32.606933  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1002 07:18:32.619772  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1002 07:18:32.634020  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1002 07:18:32.646873  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1002 07:18:32.659836  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1002 07:18:32.673417  346554 ssh_runner.go:195] Run: openssl version
	I1002 07:18:32.679719  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:18:32.688081  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.692003  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.692135  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.733286  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:18:32.741334  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:18:32.749624  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.753431  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.753505  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.794364  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:18:32.802247  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:18:32.810290  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.813847  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.813927  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.854739  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:18:32.862471  346554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:18:32.866281  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:18:32.907787  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:18:32.948617  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:18:32.989448  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:18:33.030881  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:18:33.074016  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 07:18:33.117026  346554 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1002 07:18:33.117170  346554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:18:33.117220  346554 kube-vip.go:115] generating kube-vip config ...
	I1002 07:18:33.117288  346554 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:18:33.133837  346554 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:18:33.133931  346554 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
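
The earlier lsmod | grep ip_vs probe exited non-zero, so minikube skips the IPVS-based control-plane load-balancing options and the manifest above relies on the ARP-advertised VIP (vip_arp: "true") at 192.168.49.254. Since lsmod is only a view over /proc/modules, the same probe can be written without shelling out; a small illustrative sketch (not minikube's implementation):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipvsAvailable reports whether any ip_vs* module is loaded, by scanning
    // /proc/modules (the same data `lsmod` prints).
    func ipvsAvailable() (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if strings.HasPrefix(sc.Text(), "ip_vs") {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := ipvsAvailable()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("ip_vs loaded:", ok) // false in this run, hence no IPVS load-balancing
    }
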
	I1002 07:18:33.134029  346554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:18:33.142503  346554 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:18:33.142627  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1002 07:18:33.150436  346554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 07:18:33.163196  346554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:18:33.176800  346554 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:18:33.191119  346554 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:18:33.195012  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:18:33.205076  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:18:33.339361  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:18:33.353170  346554 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:18:33.353495  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:18:33.359500  346554 out.go:179] * Verifying Kubernetes components...
	I1002 07:18:33.362288  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:18:33.491257  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:18:33.505467  346554 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1002 07:18:33.505560  346554 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1002 07:18:33.505989  346554 node_ready.go:35] waiting up to 6m0s for node "ha-550225-m02" to be "Ready" ...
	W1002 07:18:35.506749  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:38.010468  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:40.016084  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:42.506872  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:44.507212  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:47.007659  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:49.506544  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:51.506605  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:18:54.785251  346554 node_ready.go:49] node "ha-550225-m02" is "Ready"
	I1002 07:18:54.785285  346554 node_ready.go:38] duration metric: took 21.279267345s for node "ha-550225-m02" to be "Ready" ...
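	(The wait above polls the node object until its "Ready" condition reports True. minikube does this through its Go client; a minimal manual check of the same condition, assuming kubectl is pointed at this cluster, would be:
	    kubectl get node ha-550225-m02 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # prints "True" once the kubelet on m02 reports Ready
	which is the transition the log records after 21.28s.)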
	I1002 07:18:54.785300  346554 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:18:54.785382  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:55.286257  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:55.786278  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:56.285480  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:56.785495  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:57.286432  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:57.786472  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:58.285596  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:58.786260  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:59.286148  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:59.785674  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:00.286401  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:00.786468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:01.286310  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:01.786133  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:02.285476  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:02.785523  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:03.285578  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:03.785477  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:04.285835  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:04.786152  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:05.285495  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:05.785558  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:06.285602  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:06.785496  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:07.286468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:07.786358  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:08.286294  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:08.786349  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:09.286208  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:09.786292  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:10.285577  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:10.785589  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:11.286341  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:11.785523  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:12.286415  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:12.786007  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:13.286205  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:13.786328  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:14.285849  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:14.786397  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:15.285488  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:15.785431  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:16.285445  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:16.785468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:17.285527  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:17.785637  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:18.285535  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:18.786137  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:19.286152  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:19.786052  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:20.285507  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:20.785522  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:21.285716  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:21.786849  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:22.286372  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:22.786418  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:23.286092  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:23.786120  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:24.285506  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:24.785439  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:25.286469  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:25.785780  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:26.285507  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:26.785611  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:27.286260  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:27.785499  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:28.285509  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:28.785521  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:29.285762  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:29.786049  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:30.286329  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:30.785543  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:31.285473  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:31.786013  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:32.285818  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:32.785931  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:33.285557  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
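	(The repeated pgrep runs above are minikube waiting for a kube-apiserver process to appear on the node before moving on. A rough hand-rolled equivalent of that polling loop, with an illustrative interval and timeout rather than minikube's actual values, is:
	    # poll roughly twice a second for up to 60s
	    for i in $(seq 1 120); do
	      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        echo "kube-apiserver is running"; break
	      fi
	      sleep 0.5
	    done
	When the process never shows up, minikube falls back to the log-gathering pass that starts on the next line.)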
	I1002 07:19:33.786122  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:33.786216  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:33.819648  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:33.819668  346554 cri.go:89] found id: ""
	I1002 07:19:33.819678  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:33.819746  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.823889  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:33.823960  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:33.855251  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:33.855272  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:33.855277  346554 cri.go:89] found id: ""
	I1002 07:19:33.855285  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:33.855351  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.858992  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.862888  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:33.862975  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:33.894144  346554 cri.go:89] found id: ""
	I1002 07:19:33.894169  346554 logs.go:282] 0 containers: []
	W1002 07:19:33.894178  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:33.894184  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:33.894243  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:33.921104  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:33.921125  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:33.921130  346554 cri.go:89] found id: ""
	I1002 07:19:33.921137  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:33.921194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.925016  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.928536  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:33.928631  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:33.961082  346554 cri.go:89] found id: ""
	I1002 07:19:33.961111  346554 logs.go:282] 0 containers: []
	W1002 07:19:33.961121  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:33.961127  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:33.961187  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:33.993876  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:33.993901  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:33.993906  346554 cri.go:89] found id: ""
	I1002 07:19:33.993916  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:33.993979  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.999741  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:34.004783  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:34.004869  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:34.034228  346554 cri.go:89] found id: ""
	I1002 07:19:34.034256  346554 logs.go:282] 0 containers: []
	W1002 07:19:34.034265  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:34.034275  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:34.034288  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:34.096737  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:34.096779  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:34.132301  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:34.132339  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:34.182701  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:34.182737  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:34.217015  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:34.217044  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:34.232712  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:34.232741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:34.652633  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:34.643757    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.644504    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.646352    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647072    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647911    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:34.643757    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.644504    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.646352    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647072    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647911    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:34.652655  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:34.652669  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:34.681086  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:34.681118  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:34.708033  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:34.708062  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:34.793299  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:34.793407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:34.848620  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:34.848649  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:34.948533  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:34.948572  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
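	(Each "Gathering logs" pass above runs the same handful of commands over SSH; condensed into a sketch, with the component name and container IDs being whatever crictl reports on that node, the sequence is roughly:
	    ids=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	    for id in $ids; do sudo crictl logs --tail 400 "$id"; done
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	The same pass then repeats on each retry below until the apiserver either appears or the wait times out.)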
	I1002 07:19:37.477483  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:37.488961  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:37.489035  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:37.518325  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:37.518349  346554 cri.go:89] found id: ""
	I1002 07:19:37.518358  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:37.518419  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.522140  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:37.522269  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:37.549073  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:37.549093  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:37.549098  346554 cri.go:89] found id: ""
	I1002 07:19:37.549105  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:37.549190  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.552869  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.556417  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:37.556497  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:37.589096  346554 cri.go:89] found id: ""
	I1002 07:19:37.589122  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.589130  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:37.589137  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:37.589199  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:37.615330  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:37.615354  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:37.615360  346554 cri.go:89] found id: ""
	I1002 07:19:37.615367  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:37.615424  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.619166  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.622673  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:37.622742  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:37.648426  346554 cri.go:89] found id: ""
	I1002 07:19:37.648458  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.648467  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:37.648474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:37.648536  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:37.676515  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:37.676536  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:37.676541  346554 cri.go:89] found id: ""
	I1002 07:19:37.676549  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:37.676605  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.680280  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.684478  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:37.684552  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:37.710689  346554 cri.go:89] found id: ""
	I1002 07:19:37.710713  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.710722  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:37.710731  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:37.710741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:37.807134  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:37.807171  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:37.877814  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:37.869236    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.869721    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871280    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871668    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.873245    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:37.869236    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.869721    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871280    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871668    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.873245    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:37.877839  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:37.877853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:37.920820  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:37.920854  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:37.956765  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:37.956802  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:37.985482  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:37.985510  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:38.017517  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:38.017548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:38.100846  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:38.100884  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:38.136290  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:38.136318  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:38.151732  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:38.151763  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:38.177792  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:38.177822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:38.229226  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:38.229260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:40.756410  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:40.767378  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:40.767448  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:40.799187  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:40.799205  346554 cri.go:89] found id: ""
	I1002 07:19:40.799213  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:40.799268  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.804369  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:40.804454  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:40.830559  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:40.830628  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:40.830652  346554 cri.go:89] found id: ""
	I1002 07:19:40.830679  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:40.830771  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.835205  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.839714  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:40.839827  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:40.867014  346554 cri.go:89] found id: ""
	I1002 07:19:40.867039  346554 logs.go:282] 0 containers: []
	W1002 07:19:40.867048  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:40.867054  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:40.867141  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:40.905810  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:40.905829  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:40.905835  346554 cri.go:89] found id: ""
	I1002 07:19:40.905842  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:40.905898  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.909648  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.913397  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:40.913471  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:40.940488  346554 cri.go:89] found id: ""
	I1002 07:19:40.940511  346554 logs.go:282] 0 containers: []
	W1002 07:19:40.940520  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:40.940526  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:40.940585  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:40.968408  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:40.968429  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:40.968439  346554 cri.go:89] found id: ""
	I1002 07:19:40.968447  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:40.968503  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.972336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.976070  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:40.976163  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:41.010288  346554 cri.go:89] found id: ""
	I1002 07:19:41.010318  346554 logs.go:282] 0 containers: []
	W1002 07:19:41.010328  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:41.010338  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:41.010353  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:41.058706  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:41.058741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:41.085223  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:41.085252  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:41.117537  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:41.117564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:41.218224  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:41.218265  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:41.234686  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:41.234727  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:41.270240  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:41.270276  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:41.321885  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:41.321922  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:41.350649  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:41.350684  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:41.382710  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:41.382740  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:41.465872  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:41.465911  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:41.547196  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:41.537685    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539123    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539741    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.541682    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.542291    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:41.537685    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539123    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539741    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.541682    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.542291    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:41.547220  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:41.547234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.074126  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:44.087746  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:44.087861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:44.116198  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.116223  346554 cri.go:89] found id: ""
	I1002 07:19:44.116232  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:44.116290  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.120227  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:44.120325  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:44.146916  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:44.146943  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:44.146948  346554 cri.go:89] found id: ""
	I1002 07:19:44.146955  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:44.147009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.151266  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.155925  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:44.156012  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:44.190430  346554 cri.go:89] found id: ""
	I1002 07:19:44.190458  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.190467  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:44.190473  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:44.190529  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:44.219366  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:44.219387  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:44.219392  346554 cri.go:89] found id: ""
	I1002 07:19:44.219400  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:44.219455  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.223324  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.226924  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:44.227000  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:44.252543  346554 cri.go:89] found id: ""
	I1002 07:19:44.252566  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.252576  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:44.252583  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:44.252650  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:44.280466  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:44.280489  346554 cri.go:89] found id: ""
	I1002 07:19:44.280498  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:44.280559  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.284050  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:44.284122  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:44.314223  346554 cri.go:89] found id: ""
	I1002 07:19:44.314250  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.314259  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:44.314269  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:44.314304  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.340933  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:44.340965  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:44.377320  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:44.377352  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:44.411349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:44.411377  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:44.516647  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:44.516695  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:44.585736  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:44.578237    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.578651    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580147    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580498    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.581966    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:44.578237    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.578651    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580147    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580498    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.581966    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:44.585771  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:44.585785  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:44.629867  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:44.629909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:44.681709  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:44.681750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:44.710536  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:44.710566  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:44.801698  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:44.801744  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:44.834146  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:44.834175  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:47.351602  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:47.362458  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:47.362546  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:47.391769  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:47.391792  346554 cri.go:89] found id: ""
	I1002 07:19:47.391802  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:47.391863  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.395882  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:47.395971  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:47.428129  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:47.428151  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:47.428156  346554 cri.go:89] found id: ""
	I1002 07:19:47.428164  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:47.428225  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.432313  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.436344  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:47.436415  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:47.464208  346554 cri.go:89] found id: ""
	I1002 07:19:47.464230  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.464238  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:47.464244  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:47.464302  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:47.494674  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:47.494731  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:47.494773  346554 cri.go:89] found id: ""
	I1002 07:19:47.494800  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:47.494885  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.499610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.503658  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:47.503779  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:47.532490  346554 cri.go:89] found id: ""
	I1002 07:19:47.532517  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.532527  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:47.532534  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:47.532599  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:47.565084  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:47.565122  346554 cri.go:89] found id: ""
	I1002 07:19:47.565131  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:47.565231  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.569404  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:47.569483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:47.597243  346554 cri.go:89] found id: ""
	I1002 07:19:47.597266  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.597275  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:47.597284  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:47.597294  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:47.693710  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:47.693748  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:47.771715  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:47.763458    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.764216    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.765967    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.766445    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.768080    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:47.763458    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.764216    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.765967    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.766445    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.768080    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:47.771739  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:47.771752  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:47.810005  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:47.810090  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:47.890792  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:47.890824  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:47.977230  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:47.977271  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:48.018612  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:48.018643  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:48.105364  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:48.105401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:48.124841  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:48.124870  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:48.193027  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:48.193069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:48.239251  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:48.239279  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:50.782662  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:50.794011  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:50.794105  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:50.838191  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:50.838216  346554 cri.go:89] found id: ""
	I1002 07:19:50.838225  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:50.838286  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.842655  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:50.842755  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:50.891807  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:50.891833  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:50.891839  346554 cri.go:89] found id: ""
	I1002 07:19:50.891847  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:50.891964  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.899196  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.904048  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:50.904143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:50.939603  346554 cri.go:89] found id: ""
	I1002 07:19:50.939626  346554 logs.go:282] 0 containers: []
	W1002 07:19:50.939635  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:50.939641  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:50.939735  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:50.971030  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:50.971053  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:50.971059  346554 cri.go:89] found id: ""
	I1002 07:19:50.971067  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:50.971179  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.975612  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.980140  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:50.980242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:51.025029  346554 cri.go:89] found id: ""
	I1002 07:19:51.025055  346554 logs.go:282] 0 containers: []
	W1002 07:19:51.025064  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:51.025071  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:51.025186  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:51.058743  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:51.058764  346554 cri.go:89] found id: ""
	I1002 07:19:51.058772  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:51.058862  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:51.064931  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:51.065035  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:51.101431  346554 cri.go:89] found id: ""
	I1002 07:19:51.101462  346554 logs.go:282] 0 containers: []
	W1002 07:19:51.101486  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:51.101498  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:51.101531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:51.126461  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:51.126494  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:51.217174  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:51.208157    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.208931    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.210624    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.211554    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.212602    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:51.208157    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.208931    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.210624    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.211554    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.212602    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:51.217200  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:51.217216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:51.279369  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:51.279449  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:51.337216  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:51.337253  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:51.425630  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:51.425669  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:51.528560  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:51.528601  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:51.556690  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:51.556719  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:51.600118  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:51.600251  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:51.632616  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:51.632650  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:51.662904  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:51.662935  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:54.196274  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:54.207476  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:54.207546  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:54.238643  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:54.238664  346554 cri.go:89] found id: ""
	I1002 07:19:54.238673  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:54.238729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.242382  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:54.242456  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:54.274345  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:54.274377  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:54.274383  346554 cri.go:89] found id: ""
	I1002 07:19:54.274390  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:54.274451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.278686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.283146  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:54.283225  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:54.315609  346554 cri.go:89] found id: ""
	I1002 07:19:54.315635  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.315645  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:54.315652  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:54.315718  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:54.343684  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:54.343709  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:54.343715  346554 cri.go:89] found id: ""
	I1002 07:19:54.343723  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:54.343789  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.347649  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.351327  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:54.351428  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:54.380301  346554 cri.go:89] found id: ""
	I1002 07:19:54.380336  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.380346  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:54.380353  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:54.380440  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:54.413081  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:54.413105  346554 cri.go:89] found id: ""
	I1002 07:19:54.413114  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:54.413172  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.417107  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:54.417181  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:54.450903  346554 cri.go:89] found id: ""
	I1002 07:19:54.450930  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.450947  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:54.450957  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:54.450972  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:54.551509  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:54.551550  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:54.567991  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:54.568018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:54.641344  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:54.632782    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.633432    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635278    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635893    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.637542    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:54.632782    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.633432    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635278    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635893    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.637542    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:54.641366  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:54.641403  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:54.677557  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:54.677592  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:54.742382  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:54.742417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:54.830648  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:54.830681  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:54.866699  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:54.866727  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:54.893138  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:54.893166  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:54.942885  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:54.942920  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:54.977070  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:54.977098  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:57.528866  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:57.540731  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:57.540803  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:57.571921  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:57.571945  346554 cri.go:89] found id: ""
	I1002 07:19:57.571954  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:57.572028  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.575942  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:57.576018  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:57.604185  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:57.604219  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:57.604224  346554 cri.go:89] found id: ""
	I1002 07:19:57.604232  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:57.604326  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.608202  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.611833  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:57.611912  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:57.640401  346554 cri.go:89] found id: ""
	I1002 07:19:57.640431  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.640440  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:57.640447  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:57.640519  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:57.671538  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:57.671560  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:57.671565  346554 cri.go:89] found id: ""
	I1002 07:19:57.671572  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:57.671629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.675430  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.679760  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:57.679837  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:57.707483  346554 cri.go:89] found id: ""
	I1002 07:19:57.707511  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.707521  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:57.707527  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:57.707592  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:57.736308  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:57.736330  346554 cri.go:89] found id: ""
	I1002 07:19:57.736338  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:57.736407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.740334  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:57.740505  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:57.771488  346554 cri.go:89] found id: ""
	I1002 07:19:57.771558  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.771575  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:57.771585  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:57.771599  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:57.824974  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:57.825013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:57.862787  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:57.862825  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:57.891348  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:57.891374  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:57.923682  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:57.923711  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:57.996115  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:57.987953    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.988650    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990229    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990623    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.992277    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:57.987953    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.988650    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990229    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990623    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.992277    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:57.996139  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:57.996155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:58.033126  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:58.033198  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:58.106377  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:58.106415  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:58.139224  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:58.139252  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:58.226478  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:58.226525  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:58.331297  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:58.331338  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:00.847448  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:00.859829  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:00.859905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:00.887965  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:00.888039  346554 cri.go:89] found id: ""
	I1002 07:20:00.888063  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:00.888133  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.892548  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:00.892623  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:00.922567  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:00.922586  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:00.922591  346554 cri.go:89] found id: ""
	I1002 07:20:00.922598  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:00.922653  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.926435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.930250  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:00.930339  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:00.959728  346554 cri.go:89] found id: ""
	I1002 07:20:00.959759  346554 logs.go:282] 0 containers: []
	W1002 07:20:00.959769  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:00.959777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:00.959861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:00.988254  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:00.988317  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:00.988338  346554 cri.go:89] found id: ""
	I1002 07:20:00.988365  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:00.988466  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.993016  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.996699  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:00.996818  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:01.024791  346554 cri.go:89] found id: ""
	I1002 07:20:01.024815  346554 logs.go:282] 0 containers: []
	W1002 07:20:01.024823  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:01.024849  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:01.024931  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:01.056703  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:01.056728  346554 cri.go:89] found id: ""
	I1002 07:20:01.056737  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:01.056820  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:01.061200  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:01.061302  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:01.092652  346554 cri.go:89] found id: ""
	I1002 07:20:01.092680  346554 logs.go:282] 0 containers: []
	W1002 07:20:01.092690  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:01.092701  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:01.092715  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:01.121048  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:01.121084  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:01.227967  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:01.228007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:01.246697  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:01.246728  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:01.299528  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:01.299606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:01.329789  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:01.329875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:01.412310  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:01.412348  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:01.449621  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:01.449651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:01.528807  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:01.519940    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.520990    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.521913    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523485    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523993    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:01.519940    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.520990    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.521913    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523485    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523993    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:01.528832  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:01.528848  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:01.557543  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:01.557575  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:01.606902  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:01.607007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:04.163648  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:04.175704  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:04.175798  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:04.202895  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:04.202920  346554 cri.go:89] found id: ""
	I1002 07:20:04.202929  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:04.202988  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.206773  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:04.206847  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:04.237461  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:04.237484  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:04.237490  346554 cri.go:89] found id: ""
	I1002 07:20:04.237497  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:04.237551  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.241192  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.244646  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:04.244721  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:04.271145  346554 cri.go:89] found id: ""
	I1002 07:20:04.271172  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.271181  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:04.271188  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:04.271290  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:04.301758  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:04.301787  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:04.301792  346554 cri.go:89] found id: ""
	I1002 07:20:04.301800  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:04.301858  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.305658  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.309360  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:04.309437  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:04.339291  346554 cri.go:89] found id: ""
	I1002 07:20:04.339317  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.339339  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:04.339347  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:04.339417  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:04.366771  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:04.366841  346554 cri.go:89] found id: ""
	I1002 07:20:04.366866  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:04.366961  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.371032  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:04.371213  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:04.396810  346554 cri.go:89] found id: ""
	I1002 07:20:04.396889  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.396905  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:04.396916  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:04.396933  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:04.414258  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:04.414291  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:04.478315  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:04.478395  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:04.536808  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:04.536847  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:04.564995  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:04.565025  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:04.592902  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:04.592931  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:04.671813  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:04.671849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:04.710652  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:04.710684  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:04.820627  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:04.820664  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:04.897187  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:04.884402    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.885229    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.886886    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.887493    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.889166    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:04.884402    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.885229    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.886886    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.887493    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.889166    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:04.897212  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:04.897229  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:04.936329  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:04.936358  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.496901  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:07.514473  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:07.514547  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:07.540993  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:07.541017  346554 cri.go:89] found id: ""
	I1002 07:20:07.541025  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:07.541109  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.545015  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:07.545090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:07.572646  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:07.572670  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:07.572675  346554 cri.go:89] found id: ""
	I1002 07:20:07.572683  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:07.572763  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.576707  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.580612  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:07.580684  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:07.606885  346554 cri.go:89] found id: ""
	I1002 07:20:07.606909  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.606917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:07.606923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:07.606980  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:07.633971  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.634051  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:07.634072  346554 cri.go:89] found id: ""
	I1002 07:20:07.634115  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:07.634212  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.638009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.641489  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:07.641558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:07.669226  346554 cri.go:89] found id: ""
	I1002 07:20:07.669252  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.669262  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:07.669269  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:07.669328  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:07.697084  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:07.697110  346554 cri.go:89] found id: ""
	I1002 07:20:07.697119  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:07.697218  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.702023  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:07.702125  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:07.729244  346554 cri.go:89] found id: ""
	I1002 07:20:07.729270  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.729279  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:07.729289  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:07.729305  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:07.774187  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:07.774226  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.840113  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:07.840153  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:07.873716  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:07.873757  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:07.891261  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:07.891289  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:07.916233  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:07.916263  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:07.952299  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:07.952332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:07.986719  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:07.986746  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:08.071303  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:08.071345  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:08.108002  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:08.108028  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:08.210536  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:08.210576  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:08.294093  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:08.284651    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286253    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286944    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.288549    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.289239    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:08.284651    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286253    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286944    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.288549    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.289239    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:10.795316  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:10.809081  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:10.809162  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:10.842834  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:10.842857  346554 cri.go:89] found id: ""
	I1002 07:20:10.842866  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:10.842923  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.846661  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:10.846743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:10.885119  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:10.885154  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:10.885160  346554 cri.go:89] found id: ""
	I1002 07:20:10.885167  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:10.885227  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.888993  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.892673  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:10.892745  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:10.919884  346554 cri.go:89] found id: ""
	I1002 07:20:10.919910  346554 logs.go:282] 0 containers: []
	W1002 07:20:10.919920  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:10.919926  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:10.919986  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:10.948791  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:10.948813  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:10.948818  346554 cri.go:89] found id: ""
	I1002 07:20:10.948832  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:10.948888  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.952760  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.956362  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:10.956465  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:10.984495  346554 cri.go:89] found id: ""
	I1002 07:20:10.984518  346554 logs.go:282] 0 containers: []
	W1002 07:20:10.984528  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:10.984535  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:10.984636  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:11.017757  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:11.017840  346554 cri.go:89] found id: ""
	I1002 07:20:11.017854  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:11.017923  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:11.022016  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:11.022121  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:11.049783  346554 cri.go:89] found id: ""
	I1002 07:20:11.049807  346554 logs.go:282] 0 containers: []
	W1002 07:20:11.049816  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:11.049826  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:11.049858  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:11.130029  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:11.121829    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.122481    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124100    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124782    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.126290    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:11.121829    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.122481    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124100    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124782    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.126290    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:11.130050  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:11.130065  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:11.158585  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:11.158617  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:11.206663  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:11.206698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:11.251780  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:11.251812  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:11.320488  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:11.320524  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:11.401025  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:11.401061  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:11.509831  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:11.509925  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:11.528908  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:11.528984  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:11.560309  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:11.560340  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:11.587476  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:11.587505  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:14.117921  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:14.129181  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:14.129256  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:14.155142  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:14.155165  346554 cri.go:89] found id: ""
	I1002 07:20:14.155174  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:14.155234  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.158996  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:14.159072  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:14.187368  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:14.187439  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:14.187451  346554 cri.go:89] found id: ""
	I1002 07:20:14.187459  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:14.187516  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.191550  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.195394  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:14.195489  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:14.221702  346554 cri.go:89] found id: ""
	I1002 07:20:14.221731  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.221741  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:14.221748  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:14.221805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:14.250745  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:14.250768  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:14.250774  346554 cri.go:89] found id: ""
	I1002 07:20:14.250781  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:14.250840  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.254464  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.257656  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:14.257732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:14.287657  346554 cri.go:89] found id: ""
	I1002 07:20:14.287684  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.287693  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:14.287699  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:14.287763  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:14.317647  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:14.317670  346554 cri.go:89] found id: ""
	I1002 07:20:14.317680  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:14.317738  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.321550  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:14.321664  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:14.347420  346554 cri.go:89] found id: ""
	I1002 07:20:14.347445  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.347455  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:14.347465  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:14.347476  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:14.428069  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:14.428106  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:14.482408  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:14.482447  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:14.534003  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:14.534036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:14.587616  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:14.587652  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:14.615153  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:14.615189  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:14.649482  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:14.649517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:14.745400  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:14.745440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:14.765273  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:14.765307  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:14.841087  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:14.832238    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.833271    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.834838    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.835677    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.837327    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:14.832238    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.833271    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.834838    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.835677    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.837327    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:14.841109  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:14.841123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:14.867206  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:14.867236  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:17.396729  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:17.407809  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:17.407882  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:17.435626  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:17.435649  346554 cri.go:89] found id: ""
	I1002 07:20:17.435667  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:17.435729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.440093  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:17.440173  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:17.481710  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:17.481732  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:17.481738  346554 cri.go:89] found id: ""
	I1002 07:20:17.481745  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:17.481808  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.488857  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.492676  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:17.492748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:17.535179  346554 cri.go:89] found id: ""
	I1002 07:20:17.535251  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.535277  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:17.535317  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:17.535404  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:17.567305  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:17.567330  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:17.567335  346554 cri.go:89] found id: ""
	I1002 07:20:17.567343  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:17.567405  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.572504  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.576436  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:17.576540  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:17.604459  346554 cri.go:89] found id: ""
	I1002 07:20:17.604489  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.604498  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:17.604504  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:17.604568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:17.632230  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:17.632254  346554 cri.go:89] found id: ""
	I1002 07:20:17.632263  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:17.632352  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.636309  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:17.636416  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:17.664031  346554 cri.go:89] found id: ""
	I1002 07:20:17.664058  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.664068  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:17.664078  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:17.664090  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:17.690836  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:17.690911  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:17.720348  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:17.720376  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:17.752215  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:17.752295  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:17.855749  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:17.855789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:17.872293  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:17.872320  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:17.923506  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:17.923540  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:17.971187  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:17.971220  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:18.041592  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:18.041630  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:18.085650  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:18.085682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:18.171333  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:18.171372  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:18.244409  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:18.236277    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.236822    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238310    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238776    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.240614    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:18.236277    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.236822    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238310    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238776    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.240614    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:20.746282  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:20.757663  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:20.757743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:20.787729  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:20.787751  346554 cri.go:89] found id: ""
	I1002 07:20:20.787760  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:20.787845  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.792330  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:20.792424  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:20.829800  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:20.829824  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:20.829830  346554 cri.go:89] found id: ""
	I1002 07:20:20.829838  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:20.829899  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.833952  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.837642  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:20.837723  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:20.867702  346554 cri.go:89] found id: ""
	I1002 07:20:20.867725  346554 logs.go:282] 0 containers: []
	W1002 07:20:20.867734  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:20.867740  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:20.867830  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:20.908994  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:20.909016  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:20.909022  346554 cri.go:89] found id: ""
	I1002 07:20:20.909029  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:20.909085  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.913045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.916567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:20.916643  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:20.947545  346554 cri.go:89] found id: ""
	I1002 07:20:20.947571  346554 logs.go:282] 0 containers: []
	W1002 07:20:20.947581  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:20.947588  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:20.947651  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:20.980904  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:20.980984  346554 cri.go:89] found id: ""
	I1002 07:20:20.980999  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:20.981082  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.984909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:20.984982  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:21.020855  346554 cri.go:89] found id: ""
	I1002 07:20:21.020878  346554 logs.go:282] 0 containers: []
	W1002 07:20:21.020887  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:21.020896  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:21.020907  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:21.117602  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:21.117638  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:21.192022  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:21.182767    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.183788    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185393    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185998    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.187680    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:21.182767    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.183788    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185393    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185998    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.187680    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:21.192043  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:21.192057  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:21.276022  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:21.276060  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:21.308782  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:21.308822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:21.396093  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:21.396132  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:21.438867  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:21.438900  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:21.463876  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:21.463906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:21.500802  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:21.500843  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:21.550471  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:21.550508  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:21.590310  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:21.590349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:24.119676  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:24.131693  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:24.131783  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:24.163845  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:24.163870  346554 cri.go:89] found id: ""
	I1002 07:20:24.163879  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:24.163939  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.167667  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:24.167742  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:24.195635  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:24.195658  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:24.195664  346554 cri.go:89] found id: ""
	I1002 07:20:24.195672  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:24.195731  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.199786  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.204099  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:24.204199  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:24.233690  346554 cri.go:89] found id: ""
	I1002 07:20:24.233716  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.233726  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:24.233733  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:24.233790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:24.262505  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:24.262565  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:24.262586  346554 cri.go:89] found id: ""
	I1002 07:20:24.262614  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:24.262691  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.266650  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.270417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:24.270511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:24.297687  346554 cri.go:89] found id: ""
	I1002 07:20:24.297713  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.297723  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:24.297729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:24.297790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:24.325175  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:24.325197  346554 cri.go:89] found id: ""
	I1002 07:20:24.325205  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:24.325284  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.329310  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:24.329399  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:24.358432  346554 cri.go:89] found id: ""
	I1002 07:20:24.358458  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.358468  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:24.358477  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:24.358489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:24.418997  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:24.419034  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:24.449127  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:24.449155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:24.545814  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:24.545853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:24.561748  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:24.561777  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:24.632202  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:24.623701    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.624508    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626130    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626462    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.628020    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:24.623701    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.624508    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626130    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626462    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.628020    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:24.632226  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:24.632239  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:24.662637  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:24.662668  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:24.740789  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:24.740830  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:24.773325  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:24.773357  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:24.807399  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:24.807428  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:24.853933  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:24.853972  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:27.396082  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:27.406955  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:27.407027  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:27.435147  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:27.435171  346554 cri.go:89] found id: ""
	I1002 07:20:27.435180  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:27.435238  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.440669  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:27.440745  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:27.467109  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:27.467176  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:27.467196  346554 cri.go:89] found id: ""
	I1002 07:20:27.467205  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:27.467275  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.471217  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.474815  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:27.474888  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:27.503111  346554 cri.go:89] found id: ""
	I1002 07:20:27.503136  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.503145  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:27.503152  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:27.503222  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:27.540213  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:27.540253  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:27.540260  346554 cri.go:89] found id: ""
	I1002 07:20:27.540276  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:27.540359  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.544590  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.548529  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:27.548605  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:27.577677  346554 cri.go:89] found id: ""
	I1002 07:20:27.577746  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.577772  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:27.577798  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:27.577892  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:27.607310  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:27.607329  346554 cri.go:89] found id: ""
	I1002 07:20:27.607337  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:27.607393  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.611619  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:27.611690  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:27.647844  346554 cri.go:89] found id: ""
	I1002 07:20:27.647872  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.647882  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:27.647892  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:27.647905  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:27.723377  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:27.713686    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.714844    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.715834    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717611    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717950    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:27.713686    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.714844    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.715834    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717611    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717950    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:27.723400  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:27.723419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:27.750902  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:27.750932  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:27.804228  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:27.804267  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:27.866989  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:27.867068  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:27.895361  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:27.895393  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:28.004869  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:28.004912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:28.030605  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:28.030637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:28.090494  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:28.090531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:28.120915  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:28.120953  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:28.213702  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:28.213740  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:30.746147  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:30.758010  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:30.758090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:30.789909  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:30.789936  346554 cri.go:89] found id: ""
	I1002 07:20:30.789945  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:30.790004  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.794321  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:30.794407  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:30.823421  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:30.823445  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:30.823451  346554 cri.go:89] found id: ""
	I1002 07:20:30.823459  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:30.823520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.827486  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.831334  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:30.831416  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:30.857968  346554 cri.go:89] found id: ""
	I1002 07:20:30.857996  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.858005  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:30.858012  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:30.858073  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:30.885972  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:30.885997  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:30.886002  346554 cri.go:89] found id: ""
	I1002 07:20:30.886010  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:30.886074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.891710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.897102  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:30.897174  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:30.928917  346554 cri.go:89] found id: ""
	I1002 07:20:30.928944  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.928953  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:30.928960  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:30.929079  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:30.957428  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:30.957456  346554 cri.go:89] found id: ""
	I1002 07:20:30.957465  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:30.957524  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.961555  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:30.961638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:30.991607  346554 cri.go:89] found id: ""
	I1002 07:20:30.991644  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.991654  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:30.991664  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:30.991682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:31.034696  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:31.034732  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:31.095475  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:31.095521  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:31.124509  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:31.124543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:31.164950  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:31.164982  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:31.242438  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:31.232305    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.233259    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.234890    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.236692    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.237374    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:31.232305    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.233259    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.234890    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.236692    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.237374    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:31.242461  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:31.242475  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:31.288791  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:31.288829  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:31.324555  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:31.324590  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:31.358683  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:31.358775  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:31.442957  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:31.443002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:31.546184  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:31.546226  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:34.062520  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:34.074346  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:34.074429  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:34.104094  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:34.104116  346554 cri.go:89] found id: ""
	I1002 07:20:34.104124  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:34.104184  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.108168  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:34.108242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:34.134780  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:34.134803  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:34.134808  346554 cri.go:89] found id: ""
	I1002 07:20:34.134816  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:34.134873  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.140158  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.144631  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:34.144709  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:34.171174  346554 cri.go:89] found id: ""
	I1002 07:20:34.171197  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.171209  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:34.171216  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:34.171279  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:34.201197  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:34.201265  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:34.201279  346554 cri.go:89] found id: ""
	I1002 07:20:34.201289  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:34.201358  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.205487  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.209274  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:34.209371  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:34.236797  346554 cri.go:89] found id: ""
	I1002 07:20:34.236823  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.236832  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:34.236839  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:34.236899  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:34.268130  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:34.268153  346554 cri.go:89] found id: ""
	I1002 07:20:34.268163  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:34.268221  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.272288  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:34.272494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:34.303012  346554 cri.go:89] found id: ""
	I1002 07:20:34.303036  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.303046  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:34.303057  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:34.303069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:34.330987  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:34.331016  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:34.409294  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:34.409332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:34.444890  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:34.444921  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:34.529848  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:34.521813    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.522492    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.523830    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.524582    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.526232    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:34.521813    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.522492    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.523830    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.524582    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.526232    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:34.529873  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:34.529887  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:34.576746  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:34.576783  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:34.617959  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:34.617994  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:34.680077  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:34.680116  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:34.709769  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:34.709801  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:34.741411  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:34.741440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:34.841059  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:34.841096  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:37.359292  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:37.370946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:37.371032  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:37.399137  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:37.399162  346554 cri.go:89] found id: ""
	I1002 07:20:37.399171  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:37.399230  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.403338  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:37.403412  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:37.430753  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:37.430777  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:37.430782  346554 cri.go:89] found id: ""
	I1002 07:20:37.430790  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:37.430846  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.434756  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.440208  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:37.440282  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:37.466624  346554 cri.go:89] found id: ""
	I1002 07:20:37.466708  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.466741  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:37.466763  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:37.466859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:37.494022  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:37.494043  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:37.494049  346554 cri.go:89] found id: ""
	I1002 07:20:37.494057  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:37.494137  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.498098  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.502412  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:37.502500  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:37.535920  346554 cri.go:89] found id: ""
	I1002 07:20:37.535947  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.535956  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:37.535963  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:37.536022  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:37.562970  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:37.562994  346554 cri.go:89] found id: ""
	I1002 07:20:37.563004  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:37.563062  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.567000  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:37.567077  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:37.595796  346554 cri.go:89] found id: ""
	I1002 07:20:37.595823  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.595832  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:37.595842  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:37.595875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:37.622318  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:37.622347  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:37.698567  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:37.698606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:37.730294  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:37.730323  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:37.746780  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:37.746819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:37.774051  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:37.774082  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:37.842657  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:37.842692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:37.879058  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:37.879101  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:37.958213  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:37.958255  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:38.066523  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:38.066564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:38.140589  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:38.132053    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.132715    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.134486    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.135135    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.136775    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:38.132053    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.132715    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.134486    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.135135    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.136775    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:38.140614  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:38.140628  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:40.668101  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:40.680533  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:40.680613  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:40.709182  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:40.709201  346554 cri.go:89] found id: ""
	I1002 07:20:40.709217  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:40.709275  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.714063  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:40.714131  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:40.741940  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:40.741960  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:40.741965  346554 cri.go:89] found id: ""
	I1002 07:20:40.741972  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:40.742030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.746103  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.749819  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:40.749890  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:40.779806  346554 cri.go:89] found id: ""
	I1002 07:20:40.779869  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.779893  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:40.779918  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:40.779999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:40.818846  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:40.818910  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:40.818930  346554 cri.go:89] found id: ""
	I1002 07:20:40.818956  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:40.819034  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.825049  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.829111  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:40.829255  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:40.857000  346554 cri.go:89] found id: ""
	I1002 07:20:40.857070  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.857101  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:40.857116  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:40.857204  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:40.890997  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:40.891021  346554 cri.go:89] found id: ""
	I1002 07:20:40.891030  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:40.891120  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.902062  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:40.902188  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:40.931155  346554 cri.go:89] found id: ""
	I1002 07:20:40.931192  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.931201  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:40.931258  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:40.931282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:40.968238  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:40.968267  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:41.004537  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:41.004577  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:41.077656  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:41.077693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:41.110709  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:41.110738  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:41.146808  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:41.146839  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:41.218315  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:41.209116    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.209601    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.211401    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213018    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213363    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:41.209116    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.209601    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.211401    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213018    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213363    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:41.218395  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:41.218476  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:41.270106  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:41.270141  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:41.300977  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:41.301007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:41.385349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:41.385387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:41.485614  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:41.485658  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:44.002362  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:44.017480  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:44.017558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:44.055626  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:44.055653  346554 cri.go:89] found id: ""
	I1002 07:20:44.055662  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:44.055736  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.059917  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:44.059997  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:44.097033  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:44.097067  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:44.097072  346554 cri.go:89] found id: ""
	I1002 07:20:44.097079  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:44.097147  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.101257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.105790  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:44.105890  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:44.134184  346554 cri.go:89] found id: ""
	I1002 07:20:44.134213  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.134222  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:44.134229  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:44.134316  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:44.172910  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:44.172972  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:44.172992  346554 cri.go:89] found id: ""
	I1002 07:20:44.173019  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:44.173087  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.177020  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.181101  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:44.181189  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:44.210050  346554 cri.go:89] found id: ""
	I1002 07:20:44.210072  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.210081  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:44.210088  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:44.210148  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:44.236942  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:44.236966  346554 cri.go:89] found id: ""
	I1002 07:20:44.236975  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:44.237032  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.240886  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:44.240968  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:44.267437  346554 cri.go:89] found id: ""
	I1002 07:20:44.267471  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.267482  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:44.267498  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:44.267522  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:44.311617  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:44.311650  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:44.371464  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:44.371502  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:44.401657  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:44.401685  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:44.429428  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:44.429458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:44.457332  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:44.457370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:44.542400  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:44.542441  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:44.576729  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:44.576808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:44.671950  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:44.671991  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:44.688074  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:44.688102  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:44.772308  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:44.762400    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.763526    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.764141    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766001    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766685    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:44.762400    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.763526    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.764141    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766001    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766685    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:44.772331  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:44.772344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.326275  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:47.337461  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:47.337588  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:47.370813  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:47.370885  346554 cri.go:89] found id: ""
	I1002 07:20:47.370909  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:47.370985  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.375983  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:47.376102  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:47.408952  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.409021  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:47.409046  346554 cri.go:89] found id: ""
	I1002 07:20:47.409075  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:47.409142  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.412894  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.416604  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:47.416678  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:47.443724  346554 cri.go:89] found id: ""
	I1002 07:20:47.443746  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.443755  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:47.443761  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:47.443825  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:47.472814  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:47.472835  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:47.472840  346554 cri.go:89] found id: ""
	I1002 07:20:47.472848  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:47.472910  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.476853  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.481052  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:47.481125  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:47.527292  346554 cri.go:89] found id: ""
	I1002 07:20:47.527316  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.527325  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:47.527331  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:47.527396  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:47.557465  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:47.557493  346554 cri.go:89] found id: ""
	I1002 07:20:47.557502  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:47.557573  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.561605  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:47.561776  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:47.592217  346554 cri.go:89] found id: ""
	I1002 07:20:47.592251  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.592261  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:47.592270  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:47.592282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:47.609667  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:47.609697  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:47.670961  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:47.670999  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:47.701512  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:47.701543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:47.730463  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:47.730493  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:47.813379  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:47.804825    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.805487    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.806775    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.807262    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.808792    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:47.804825    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.805487    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.806775    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.807262    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.808792    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:47.813403  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:47.813417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:47.839632  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:47.839663  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.890767  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:47.890807  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:47.931484  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:47.931519  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:48.013592  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:48.013683  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:48.048341  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:48.048371  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:50.660679  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:50.672098  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:50.672208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:50.698977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:50.699002  346554 cri.go:89] found id: ""
	I1002 07:20:50.699012  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:50.699155  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.703120  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:50.703197  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:50.731004  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:50.731030  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:50.731035  346554 cri.go:89] found id: ""
	I1002 07:20:50.731043  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:50.731134  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.735170  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.739036  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:50.739228  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:50.765233  346554 cri.go:89] found id: ""
	I1002 07:20:50.765257  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.765267  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:50.765276  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:50.765337  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:50.798825  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:50.798846  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:50.798851  346554 cri.go:89] found id: ""
	I1002 07:20:50.798858  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:50.798922  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.803023  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.806604  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:50.806684  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:50.834561  346554 cri.go:89] found id: ""
	I1002 07:20:50.834595  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.834605  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:50.834612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:50.834685  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:50.862616  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:50.862640  346554 cri.go:89] found id: ""
	I1002 07:20:50.862649  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:50.862719  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.866512  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:50.866591  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:50.894801  346554 cri.go:89] found id: ""
	I1002 07:20:50.894874  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.894898  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:50.894927  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:50.894970  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:50.922014  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:50.922093  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:50.963158  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:50.963238  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:51.041253  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:51.041298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:51.078068  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:51.078373  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:51.109345  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:51.109379  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:51.143553  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:51.143586  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:51.160251  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:51.160287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:51.232331  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:51.222843    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.223585    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226402    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226914    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.228078    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:51.222843    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.223585    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226402    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226914    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.228078    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:51.232357  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:51.232370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:51.284859  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:51.284891  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:51.366726  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:51.366764  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:53.965349  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:53.977241  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:53.977365  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:54.007342  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:54.007370  346554 cri.go:89] found id: ""
	I1002 07:20:54.007379  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:54.007452  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.014154  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:54.014243  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:54.042738  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:54.042761  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:54.042767  346554 cri.go:89] found id: ""
	I1002 07:20:54.042787  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:54.042849  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.047324  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.052426  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:54.052514  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:54.092137  346554 cri.go:89] found id: ""
	I1002 07:20:54.092162  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.092171  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:54.092177  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:54.092245  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:54.123873  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:54.123895  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:54.123900  346554 cri.go:89] found id: ""
	I1002 07:20:54.123908  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:54.123966  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.128307  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.132643  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:54.132764  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:54.167072  346554 cri.go:89] found id: ""
	I1002 07:20:54.167173  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.167197  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:54.167223  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:54.167317  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:54.201096  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:54.201124  346554 cri.go:89] found id: ""
	I1002 07:20:54.201133  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:54.201192  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.205200  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:54.205319  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:54.232346  346554 cri.go:89] found id: ""
	I1002 07:20:54.232375  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.232384  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:54.232394  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:54.232424  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:54.307053  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:54.297800    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.298604    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.300420    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.301180    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.302885    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:54.297800    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.298604    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.300420    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.301180    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.302885    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:54.307076  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:54.307120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:54.339765  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:54.339797  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:54.389419  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:54.389463  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:54.427898  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:54.427934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:54.459945  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:54.459979  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:54.495013  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:54.495049  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:54.593488  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:54.593523  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:54.699166  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:54.699248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:54.715185  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:54.715217  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:54.790047  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:54.790081  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:57.332703  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:57.343440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:57.343508  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:57.371159  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:57.371224  346554 cri.go:89] found id: ""
	I1002 07:20:57.371248  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:57.371325  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.376379  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:57.376455  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:57.403394  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:57.403417  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:57.403423  346554 cri.go:89] found id: ""
	I1002 07:20:57.403431  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:57.403486  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.407238  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.410942  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:57.411033  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:57.438995  346554 cri.go:89] found id: ""
	I1002 07:20:57.439020  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.439029  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:57.439036  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:57.439133  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:57.471614  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:57.471639  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:57.471644  346554 cri.go:89] found id: ""
	I1002 07:20:57.471656  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:57.471714  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.475670  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.479817  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:57.479927  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:57.514129  346554 cri.go:89] found id: ""
	I1002 07:20:57.514152  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.514160  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:57.514166  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:57.514229  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:57.540930  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:57.540954  346554 cri.go:89] found id: ""
	I1002 07:20:57.540963  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:57.541019  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.545166  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:57.545246  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:57.580607  346554 cri.go:89] found id: ""
	I1002 07:20:57.580633  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.580643  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:57.580653  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:57.580682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:57.662349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:57.662389  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:57.761863  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:57.761900  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:57.830325  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:57.830366  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:57.856569  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:57.856598  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:57.888135  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:57.888164  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:57.906242  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:57.906270  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:57.976993  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:57.967788    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.968516    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.970387    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.971058    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.973057    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:57.967788    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.968516    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.970387    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.971058    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.973057    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:57.977018  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:57.977033  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:58.011287  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:58.011323  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:58.063746  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:58.063782  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:58.114504  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:58.114539  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:00.655161  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:00.666760  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:00.666847  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:00.699194  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:00.699218  346554 cri.go:89] found id: ""
	I1002 07:21:00.699227  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:00.699283  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.703475  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:00.703551  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:00.730837  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:00.730862  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:00.730867  346554 cri.go:89] found id: ""
	I1002 07:21:00.730874  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:00.730933  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.734900  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.738704  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:00.738777  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:00.765809  346554 cri.go:89] found id: ""
	I1002 07:21:00.765832  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.765841  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:00.765847  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:00.765903  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:00.806888  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:00.806911  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:00.806916  346554 cri.go:89] found id: ""
	I1002 07:21:00.806924  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:00.806982  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.810980  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.815454  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:00.815527  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:00.843377  346554 cri.go:89] found id: ""
	I1002 07:21:00.843403  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.843413  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:00.843419  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:00.843480  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:00.870064  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:00.870084  346554 cri.go:89] found id: ""
	I1002 07:21:00.870094  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:00.870150  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.874067  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:00.874142  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:00.912375  346554 cri.go:89] found id: ""
	I1002 07:21:00.912400  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.912409  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:00.912419  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:00.912437  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:01.010660  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:01.010703  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:01.027564  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:01.027589  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:01.108980  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:01.099987    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101432    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101988    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103531    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103983    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:01.099987    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101432    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101988    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103531    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103983    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:01.109003  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:01.109017  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:01.140899  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:01.140925  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:01.201677  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:01.201719  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:01.249485  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:01.249516  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:01.310648  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:01.310682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:01.339591  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:01.339668  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:01.368293  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:01.368363  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:01.451526  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:01.451565  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:03.985004  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:03.995665  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:03.995732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:04.038756  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:04.038786  346554 cri.go:89] found id: ""
	I1002 07:21:04.038796  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:04.038863  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.042734  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:04.042813  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:04.080960  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:04.080984  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:04.080990  346554 cri.go:89] found id: ""
	I1002 07:21:04.080998  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:04.081055  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.085045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.088904  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:04.088984  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:04.116470  346554 cri.go:89] found id: ""
	I1002 07:21:04.116495  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.116504  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:04.116511  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:04.116568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:04.143301  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:04.143324  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:04.143330  346554 cri.go:89] found id: ""
	I1002 07:21:04.143336  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:04.143392  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.149220  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.156754  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:04.156875  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:04.186088  346554 cri.go:89] found id: ""
	I1002 07:21:04.186115  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.186125  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:04.186131  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:04.186222  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:04.213953  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:04.213978  346554 cri.go:89] found id: ""
	I1002 07:21:04.213987  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:04.214074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.220236  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:04.220339  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:04.249797  346554 cri.go:89] found id: ""
	I1002 07:21:04.249825  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.249834  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:04.249876  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:04.249893  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:04.334427  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:04.334464  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:04.365264  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:04.365294  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:04.467641  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:04.467693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:04.495501  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:04.495532  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:04.553841  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:04.553879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:04.590884  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:04.590912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:04.618124  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:04.618157  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:04.634781  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:04.634812  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:04.712412  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:04.704035    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.704877    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706460    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706999    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.708596    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:04.704035    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.704877    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706460    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706999    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.708596    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:04.712440  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:04.712458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:04.772367  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:04.772405  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
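	The block above repeats on each poll: minikube's log gatherer looks up every expected control-plane container by name through crictl, tails the logs of whatever it finds, and also pulls the kubelet and CRI-O journals plus dmesg. A minimal shell sketch of that sweep, assuming crictl and journalctl are available on the node (for example via minikube ssh), not the test's own code:

	  # Hedged sketch of the per-component sweep shown in the log above.
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    if [ -z "$ids" ]; then
	      echo "no container found matching $name"
	      continue
	    fi
	    for id in $ids; do
	      sudo crictl logs --tail 400 "$id"    # same tail depth the gatherer uses
	    done
	  done
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400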
	I1002 07:21:07.313327  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:07.324335  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:07.324410  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:07.352343  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:07.352367  346554 cri.go:89] found id: ""
	I1002 07:21:07.352376  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:07.352456  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.356634  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:07.356705  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:07.384754  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:07.384778  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:07.384783  346554 cri.go:89] found id: ""
	I1002 07:21:07.384791  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:07.384871  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.388840  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.392572  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:07.392672  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:07.418573  346554 cri.go:89] found id: ""
	I1002 07:21:07.418605  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.418615  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:07.418622  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:07.418681  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:07.450415  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:07.450439  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:07.450445  346554 cri.go:89] found id: ""
	I1002 07:21:07.450466  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:07.450529  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.454971  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.459463  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:07.459539  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:07.488692  346554 cri.go:89] found id: ""
	I1002 07:21:07.488722  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.488730  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:07.488737  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:07.488799  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:07.520325  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:07.520350  346554 cri.go:89] found id: ""
	I1002 07:21:07.520359  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:07.520421  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.524256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:07.524330  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:07.549519  346554 cri.go:89] found id: ""
	I1002 07:21:07.549540  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.549548  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:07.549558  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:07.549569  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:07.643274  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:07.643315  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:07.716156  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:07.708091    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.708893    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710592    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710902    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.712357    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:07.708091    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.708893    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710592    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710902    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.712357    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:07.716179  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:07.716195  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:07.743950  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:07.743980  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:07.830226  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:07.830266  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:07.847230  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:07.847260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:07.875839  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:07.875908  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:07.937408  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:07.937448  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:07.974391  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:07.974428  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:08.044504  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:08.044544  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:08.085844  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:08.085875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:10.619391  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:10.631035  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:10.631208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:10.664959  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:10.664983  346554 cri.go:89] found id: ""
	I1002 07:21:10.664992  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:10.665070  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.668812  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:10.668884  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:10.695400  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:10.695424  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:10.695430  346554 cri.go:89] found id: ""
	I1002 07:21:10.695438  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:10.695526  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.699317  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.703430  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:10.703524  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:10.728859  346554 cri.go:89] found id: ""
	I1002 07:21:10.728883  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.728892  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:10.728898  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:10.728974  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:10.754882  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:10.754905  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:10.754911  346554 cri.go:89] found id: ""
	I1002 07:21:10.754918  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:10.754984  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.758686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.762139  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:10.762248  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:10.787999  346554 cri.go:89] found id: ""
	I1002 07:21:10.788067  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.788092  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:10.788115  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:10.788204  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:10.814729  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:10.814803  346554 cri.go:89] found id: ""
	I1002 07:21:10.814825  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:10.814914  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.818388  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:10.818483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:10.845398  346554 cri.go:89] found id: ""
	I1002 07:21:10.845424  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.845433  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:10.845443  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:10.845482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:10.873199  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:10.873225  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:10.951572  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:10.951609  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:11.051035  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:11.051118  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:11.130878  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:11.121998    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.122765    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.124521    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.125102    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.126722    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:11.121998    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.122765    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.124521    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.125102    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.126722    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:11.130909  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:11.130924  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:11.156885  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:11.156920  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:11.211573  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:11.211615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:11.272703  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:11.272742  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:11.301304  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:11.301336  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:11.342833  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:11.342861  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:11.360176  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:11.360204  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:13.902061  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:13.915871  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:13.915935  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:13.954412  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:13.954439  346554 cri.go:89] found id: ""
	I1002 07:21:13.954448  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:13.954513  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:13.959571  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:13.959655  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:13.994709  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:13.994729  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:13.994735  346554 cri.go:89] found id: ""
	I1002 07:21:13.994743  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:13.994797  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:13.999427  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.003663  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:14.003749  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:14.042653  346554 cri.go:89] found id: ""
	I1002 07:21:14.042680  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.042690  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:14.042696  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:14.042757  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:14.087595  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:14.087615  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:14.087620  346554 cri.go:89] found id: ""
	I1002 07:21:14.087628  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:14.087688  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.092427  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.096855  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:14.096920  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:14.126816  346554 cri.go:89] found id: ""
	I1002 07:21:14.126843  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.126852  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:14.126858  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:14.126918  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:14.155318  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:14.155339  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:14.155344  346554 cri.go:89] found id: ""
	I1002 07:21:14.155351  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:14.155407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.159934  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.164569  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:14.164634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:14.209412  346554 cri.go:89] found id: ""
	I1002 07:21:14.209437  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.209449  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:14.209459  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:14.209471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:14.225995  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:14.226022  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:14.263998  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:14.264027  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:14.360121  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:14.360159  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:14.407199  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:14.407234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:14.434782  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:14.434814  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:14.521080  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:14.521121  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:14.593104  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:14.593134  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:14.699269  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:14.699308  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:14.786512  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:14.774915    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.778879    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.779597    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781358    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781959    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:14.774915    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.778879    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.779597    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781358    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781959    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:14.786535  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:14.786548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:14.869065  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:14.869109  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:14.900362  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:14.900454  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:17.430222  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:17.442136  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:17.442212  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:17.468618  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:17.468642  346554 cri.go:89] found id: ""
	I1002 07:21:17.468664  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:17.468722  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.472407  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:17.472483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:17.500441  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:17.500462  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:17.500468  346554 cri.go:89] found id: ""
	I1002 07:21:17.500475  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:17.500534  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.504574  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.511111  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:17.511190  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:17.539180  346554 cri.go:89] found id: ""
	I1002 07:21:17.539208  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.539217  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:17.539224  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:17.539283  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:17.567616  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:17.567641  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:17.567647  346554 cri.go:89] found id: ""
	I1002 07:21:17.567654  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:17.567710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.571727  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.575519  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:17.575603  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:17.601045  346554 cri.go:89] found id: ""
	I1002 07:21:17.601070  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.601079  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:17.601086  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:17.601143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:17.628358  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:17.628379  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:17.628384  346554 cri.go:89] found id: ""
	I1002 07:21:17.628391  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:17.628479  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.632534  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.636208  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:17.636286  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:17.662364  346554 cri.go:89] found id: ""
	I1002 07:21:17.662389  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.662398  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:17.662408  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:17.662419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:17.756609  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:17.756643  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:17.772784  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:17.772821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:17.854603  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:17.846770    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.847523    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849095    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849421    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.850951    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:17.846770    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.847523    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849095    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849421    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.850951    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:17.854625  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:17.854639  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:17.890480  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:17.890513  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:17.955720  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:17.955755  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:17.986877  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:17.986906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:18.065618  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:18.065659  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:18.111257  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:18.111287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:18.141121  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:18.141151  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:18.202491  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:18.202530  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:18.232094  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:18.232124  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:20.762758  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:20.773630  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:20.773708  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:20.806503  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:20.806533  346554 cri.go:89] found id: ""
	I1002 07:21:20.806542  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:20.806599  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.810265  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:20.810338  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:20.839055  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:20.839105  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:20.839111  346554 cri.go:89] found id: ""
	I1002 07:21:20.839119  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:20.839176  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.843029  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.846663  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:20.846743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:20.875148  346554 cri.go:89] found id: ""
	I1002 07:21:20.875173  346554 logs.go:282] 0 containers: []
	W1002 07:21:20.875183  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:20.875190  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:20.875249  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:20.907677  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:20.907701  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:20.907707  346554 cri.go:89] found id: ""
	I1002 07:21:20.907715  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:20.907772  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.911686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.915632  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:20.915707  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:20.941873  346554 cri.go:89] found id: ""
	I1002 07:21:20.941899  346554 logs.go:282] 0 containers: []
	W1002 07:21:20.941908  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:20.941915  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:20.941975  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:20.973490  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:20.973515  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:20.973521  346554 cri.go:89] found id: ""
	I1002 07:21:20.973530  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:20.973585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.977414  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.981138  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:20.981213  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:21.013505  346554 cri.go:89] found id: ""
	I1002 07:21:21.013533  346554 logs.go:282] 0 containers: []
	W1002 07:21:21.013543  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:21.013553  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:21.013565  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:21.047930  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:21.047959  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:21.144461  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:21.144498  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:21.218444  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:21.209931    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.210755    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212333    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212924    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.214549    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:21.209931    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.210755    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212333    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212924    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.214549    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:21.218469  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:21.218482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:21.244979  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:21.245010  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:21.273907  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:21.273940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:21.304310  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:21.304341  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:21.383311  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:21.383390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:21.418944  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:21.418976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:21.437126  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:21.437154  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:21.499338  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:21.499373  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:21.541388  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:21.541424  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
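	Every "describe nodes" attempt above fails the same way: kubectl on the node cannot reach the API server at localhost:8443 (connection refused), so the gatherer keeps polling for a live kube-apiserver process and re-runs the sweep roughly every three seconds. A quick manual check along the same lines, using the paths seen in the log (the curl probe is only an illustrative assumption, not something the test itself runs):

	  # Hedged sketch: is anything answering on the apiserver port yet?
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'    # process check used by the gatherer
	  curl -sk https://localhost:8443/healthz; echo   # direct probe of the apiserver endpoint (assumption)
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl get nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig     # same kubeconfig the gatherer uses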
	I1002 07:21:24.103318  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:24.114524  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:24.114645  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:24.142263  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:24.142286  346554 cri.go:89] found id: ""
	I1002 07:21:24.142295  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:24.142357  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.146924  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:24.146998  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:24.174920  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:24.174945  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:24.174950  346554 cri.go:89] found id: ""
	I1002 07:21:24.174958  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:24.175015  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.179961  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.183781  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:24.183859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:24.213946  346554 cri.go:89] found id: ""
	I1002 07:21:24.213969  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.213978  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:24.213985  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:24.214044  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:24.240875  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:24.240898  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:24.240903  346554 cri.go:89] found id: ""
	I1002 07:21:24.240910  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:24.240967  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.244817  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.248504  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:24.248601  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:24.277554  346554 cri.go:89] found id: ""
	I1002 07:21:24.277579  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.277588  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:24.277595  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:24.277675  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:24.308411  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:24.308507  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:24.308518  346554 cri.go:89] found id: ""
	I1002 07:21:24.308526  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:24.308585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.312514  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.316209  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:24.316322  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:24.352013  346554 cri.go:89] found id: ""
	I1002 07:21:24.352037  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.352047  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:24.352057  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:24.352070  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:24.392888  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:24.392926  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:24.422136  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:24.422162  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:24.522148  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:24.522189  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:24.559761  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:24.559789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:24.635577  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:24.626450    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.627161    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.628806    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.629342    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.630887    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:24.626450    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.627161    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.628806    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.629342    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.630887    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:24.635658  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:24.635688  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:24.664008  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:24.664038  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:24.716205  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:24.716243  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:24.776422  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:24.776465  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:24.812576  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:24.812606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:24.850011  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:24.850051  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:24.957619  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:24.957658  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:27.474346  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:27.486924  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:27.486999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:27.527387  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:27.527411  346554 cri.go:89] found id: ""
	I1002 07:21:27.527419  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:27.527481  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.531347  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:27.531425  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:27.557184  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:27.557209  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:27.557216  346554 cri.go:89] found id: ""
	I1002 07:21:27.557226  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:27.557285  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.561185  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.564887  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:27.564964  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:27.593958  346554 cri.go:89] found id: ""
	I1002 07:21:27.593984  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.593993  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:27.594000  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:27.594070  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:27.624297  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:27.624321  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:27.624325  346554 cri.go:89] found id: ""
	I1002 07:21:27.624332  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:27.624390  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.628548  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.632313  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:27.632401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:27.658827  346554 cri.go:89] found id: ""
	I1002 07:21:27.658850  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.658858  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:27.658876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:27.658942  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:27.687346  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:27.687422  346554 cri.go:89] found id: ""
	I1002 07:21:27.687438  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:27.687516  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.691438  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:27.691563  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:27.716933  346554 cri.go:89] found id: ""
	I1002 07:21:27.716959  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.716969  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:27.716979  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:27.717019  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:27.817783  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:27.817831  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:27.857490  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:27.857525  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:27.885125  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:27.885157  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:27.918095  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:27.918133  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:27.933988  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:27.934018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:28.004686  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:27.994706    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.995565    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997325    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997806    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.999393    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:27.994706    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.995565    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997325    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997806    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.999393    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:28.004719  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:28.004734  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:28.034260  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:28.034287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:28.093230  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:28.093269  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:28.164138  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:28.164177  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:28.195157  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:28.195188  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:30.778568  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:30.789765  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:30.789833  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:30.825174  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:30.825194  346554 cri.go:89] found id: ""
	I1002 07:21:30.825202  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:30.825257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.829729  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:30.829796  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:30.856611  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:30.856632  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:30.856637  346554 cri.go:89] found id: ""
	I1002 07:21:30.856644  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:30.856701  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.860561  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.864279  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:30.864353  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:30.891192  346554 cri.go:89] found id: ""
	I1002 07:21:30.891217  346554 logs.go:282] 0 containers: []
	W1002 07:21:30.891257  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:30.891269  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:30.891353  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:30.918873  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:30.918892  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:30.918897  346554 cri.go:89] found id: ""
	I1002 07:21:30.918904  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:30.918965  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.922949  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.926830  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:30.926928  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:30.953030  346554 cri.go:89] found id: ""
	I1002 07:21:30.953059  346554 logs.go:282] 0 containers: []
	W1002 07:21:30.953068  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:30.953074  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:30.953131  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:30.980458  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:30.980480  346554 cri.go:89] found id: ""
	I1002 07:21:30.980489  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:30.980547  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.984323  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:30.984450  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:31.026334  346554 cri.go:89] found id: ""
	I1002 07:21:31.026360  346554 logs.go:282] 0 containers: []
	W1002 07:21:31.026370  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:31.026380  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:31.026416  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:31.058391  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:31.058420  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:31.116004  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:31.116040  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:31.151060  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:31.151099  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:31.231368  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:31.231406  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:31.332798  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:31.332835  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:31.413678  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:31.405625    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.406285    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.407900    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.408576    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.410010    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:31.405625    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.406285    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.407900    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.408576    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.410010    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:31.413705  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:31.413717  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:31.461265  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:31.461299  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:31.534946  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:31.534986  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:31.562600  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:31.562629  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:31.592876  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:31.592906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:34.110078  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:34.121201  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:34.121271  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:34.148533  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:34.148554  346554 cri.go:89] found id: ""
	I1002 07:21:34.148562  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:34.148621  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.152503  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:34.152585  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:34.181027  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:34.181050  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:34.181056  346554 cri.go:89] found id: ""
	I1002 07:21:34.181063  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:34.181117  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.185002  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.189485  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:34.189560  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:34.215599  346554 cri.go:89] found id: ""
	I1002 07:21:34.215625  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.215634  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:34.215641  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:34.215699  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:34.241734  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:34.241763  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:34.241768  346554 cri.go:89] found id: ""
	I1002 07:21:34.241776  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:34.241832  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.245545  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.248974  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:34.249050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:34.276023  346554 cri.go:89] found id: ""
	I1002 07:21:34.276049  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.276059  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:34.276072  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:34.276132  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:34.303384  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:34.303407  346554 cri.go:89] found id: ""
	I1002 07:21:34.303415  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:34.303472  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.307469  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:34.307539  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:34.340234  346554 cri.go:89] found id: ""
	I1002 07:21:34.340261  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.340271  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:34.340281  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:34.340293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:34.356522  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:34.356550  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:34.394796  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:34.394825  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:34.443502  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:34.443538  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:34.474055  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:34.474081  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:34.555556  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:34.555637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:34.658066  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:34.658101  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:34.733631  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:34.724940    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.725631    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.727437    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.728124    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.729973    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:34.724940    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.725631    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.727437    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.728124    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.729973    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:34.733651  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:34.733665  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:34.784032  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:34.784068  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:34.847736  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:34.847771  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:34.875075  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:34.875172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:37.408950  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:37.421164  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:37.421273  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:37.452410  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:37.452439  346554 cri.go:89] found id: ""
	I1002 07:21:37.452449  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:37.452505  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.456325  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:37.456445  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:37.486317  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:37.486340  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:37.486346  346554 cri.go:89] found id: ""
	I1002 07:21:37.486353  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:37.486451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.490342  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.494027  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:37.494104  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:37.527183  346554 cri.go:89] found id: ""
	I1002 07:21:37.527257  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.527281  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:37.527305  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:37.527403  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:37.553164  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:37.553189  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:37.553194  346554 cri.go:89] found id: ""
	I1002 07:21:37.553202  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:37.553263  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.557191  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.560812  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:37.560909  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:37.592768  346554 cri.go:89] found id: ""
	I1002 07:21:37.592837  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.592861  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:37.592887  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:37.592973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:37.619244  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:37.619275  346554 cri.go:89] found id: ""
	I1002 07:21:37.619285  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:37.619382  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.622994  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:37.623067  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:37.654796  346554 cri.go:89] found id: ""
	I1002 07:21:37.654833  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.654843  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:37.654853  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:37.654864  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:37.735865  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:37.735903  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:37.829667  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:37.829705  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:37.906371  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:37.897524    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.898687    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.899551    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901063    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901395    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:37.897524    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.898687    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.899551    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901063    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901395    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:37.906396  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:37.906409  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:37.931859  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:37.931891  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:37.982107  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:37.982141  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:38.026363  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:38.026402  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:38.097347  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:38.097387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:38.129911  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:38.129940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:38.174203  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:38.174233  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:38.192324  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:38.192356  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:40.723244  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:40.733967  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:40.734044  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:40.761160  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:40.761180  346554 cri.go:89] found id: ""
	I1002 07:21:40.761196  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:40.761257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.764997  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:40.765082  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:40.793331  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:40.793357  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:40.793376  346554 cri.go:89] found id: ""
	I1002 07:21:40.793385  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:40.793441  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.799890  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.803764  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:40.803836  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:40.834660  346554 cri.go:89] found id: ""
	I1002 07:21:40.834686  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.834696  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:40.834702  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:40.834765  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:40.866063  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:40.866087  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:40.866093  346554 cri.go:89] found id: ""
	I1002 07:21:40.866103  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:40.866168  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.870407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.873946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:40.874058  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:40.908301  346554 cri.go:89] found id: ""
	I1002 07:21:40.908367  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.908391  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:40.908417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:40.908494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:40.937896  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:40.937966  346554 cri.go:89] found id: ""
	I1002 07:21:40.937990  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:40.938080  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.941880  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:40.941952  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:40.967147  346554 cri.go:89] found id: ""
	I1002 07:21:40.967174  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.967190  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:40.967226  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:40.967238  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:41.061039  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:41.061077  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:41.080254  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:41.080282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:41.108521  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:41.108547  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:41.162117  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:41.162154  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:41.233238  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:41.233276  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:41.260363  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:41.260392  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:41.333767  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:41.325094    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.325822    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.326721    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328411    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328796    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:41.325094    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.325822    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.326721    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328411    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328796    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:41.333840  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:41.333863  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:41.370518  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:41.370556  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:41.399620  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:41.399646  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:41.485257  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:41.485299  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:44.031564  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:44.043423  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:44.043501  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:44.077366  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:44.077391  346554 cri.go:89] found id: ""
	I1002 07:21:44.077400  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:44.077473  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.082216  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:44.082297  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:44.114495  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:44.114564  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:44.114585  346554 cri.go:89] found id: ""
	I1002 07:21:44.114612  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:44.114701  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.118699  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.122876  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:44.122955  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:44.161976  346554 cri.go:89] found id: ""
	I1002 07:21:44.162003  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.162015  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:44.162021  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:44.162120  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:44.190658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:44.190682  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:44.190688  346554 cri.go:89] found id: ""
	I1002 07:21:44.190695  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:44.190800  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.194562  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.198424  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:44.198514  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:44.224096  346554 cri.go:89] found id: ""
	I1002 07:21:44.224158  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.224181  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:44.224207  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:44.224284  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:44.251545  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:44.251569  346554 cri.go:89] found id: ""
	I1002 07:21:44.251581  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:44.251639  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.255354  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:44.255428  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:44.282373  346554 cri.go:89] found id: ""
	I1002 07:21:44.282400  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.282409  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:44.282419  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:44.282431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:44.308028  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:44.308062  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:44.363685  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:44.363723  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:44.396318  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:44.396349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:44.442337  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:44.442370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:44.546740  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:44.546778  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:44.562701  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:44.562734  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:44.638865  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:44.629817    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.630563    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632343    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632894    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.634422    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:44.629817    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.630563    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632343    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632894    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.634422    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:44.638901  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:44.638934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:44.675050  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:44.675117  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:44.759066  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:44.759108  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:44.789536  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:44.789569  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:47.372747  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:47.384470  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:47.384538  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:47.411456  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:47.411476  346554 cri.go:89] found id: ""
	I1002 07:21:47.411484  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:47.411538  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.415979  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:47.416052  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:47.441980  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:47.442000  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:47.442005  346554 cri.go:89] found id: ""
	I1002 07:21:47.442012  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:47.442071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.446178  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.449820  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:47.449889  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:47.480516  346554 cri.go:89] found id: ""
	I1002 07:21:47.480597  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.480614  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:47.480622  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:47.480700  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:47.512233  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:47.512299  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:47.512321  346554 cri.go:89] found id: ""
	I1002 07:21:47.512347  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:47.512447  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.517986  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.522484  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:47.522599  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:47.554391  346554 cri.go:89] found id: ""
	I1002 07:21:47.554459  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.554483  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:47.554509  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:47.554608  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:47.581519  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:47.581586  346554 cri.go:89] found id: ""
	I1002 07:21:47.581608  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:47.581710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.585885  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:47.585999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:47.615242  346554 cri.go:89] found id: ""
	I1002 07:21:47.615272  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.615281  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:47.615291  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:47.615322  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:47.635364  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:47.635394  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:47.712651  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:47.703908    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.704731    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.705628    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.706326    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.707409    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:47.703908    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.704731    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.705628    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.706326    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.707409    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:47.712678  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:47.712694  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:47.743506  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:47.743536  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:47.811148  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:47.811227  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:47.870291  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:47.870324  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:47.910224  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:47.910257  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:47.939069  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:47.939155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:47.964969  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:47.965008  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:48.043117  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:48.043158  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:48.088315  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:48.088344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:50.689757  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:50.700824  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:50.700893  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:50.728143  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:50.728166  346554 cri.go:89] found id: ""
	I1002 07:21:50.728175  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:50.728244  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.732333  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:50.732406  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:50.757855  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:50.757880  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:50.757886  346554 cri.go:89] found id: ""
	I1002 07:21:50.757905  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:50.757972  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.762029  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.765976  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:50.766050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:50.799256  346554 cri.go:89] found id: ""
	I1002 07:21:50.799278  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.799287  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:50.799293  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:50.799360  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:50.831950  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:50.831974  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:50.831981  346554 cri.go:89] found id: ""
	I1002 07:21:50.831988  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:50.832045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.836319  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.840585  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:50.840668  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:50.870390  346554 cri.go:89] found id: ""
	I1002 07:21:50.870416  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.870428  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:50.870436  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:50.870502  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:50.900076  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:50.900103  346554 cri.go:89] found id: ""
	I1002 07:21:50.900112  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:50.900193  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.904363  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:50.904461  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:50.932728  346554 cri.go:89] found id: ""
	I1002 07:21:50.932755  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.932775  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:50.932786  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:50.932798  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:51.001280  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:50.992878    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.993924    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.994793    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.995597    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.997141    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:50.992878    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.993924    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.994793    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.995597    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.997141    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:51.001310  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:51.001326  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:51.032692  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:51.032721  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:51.086523  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:51.086563  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:51.151924  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:51.151959  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:51.181936  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:51.181965  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:51.209313  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:51.209340  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:51.246072  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:51.246103  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:51.328956  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:51.328991  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:51.362658  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:51.362692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:51.461576  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:51.461615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:53.981504  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:53.992767  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:53.992841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:54.027324  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:54.027347  346554 cri.go:89] found id: ""
	I1002 07:21:54.027356  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:54.027422  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.031946  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:54.032021  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:54.059889  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:54.059911  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:54.059916  346554 cri.go:89] found id: ""
	I1002 07:21:54.059924  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:54.059983  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.064071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.068437  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:54.068516  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:54.100879  346554 cri.go:89] found id: ""
	I1002 07:21:54.100906  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.100917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:54.100923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:54.101019  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:54.127769  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:54.127792  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:54.127798  346554 cri.go:89] found id: ""
	I1002 07:21:54.127806  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:54.127871  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.131837  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.135428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:54.135507  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:54.163909  346554 cri.go:89] found id: ""
	I1002 07:21:54.163934  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.163943  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:54.163950  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:54.164008  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:54.195746  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:54.195778  346554 cri.go:89] found id: ""
	I1002 07:21:54.195787  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:54.195846  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.200638  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:54.200733  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:54.228414  346554 cri.go:89] found id: ""
	I1002 07:21:54.228492  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.228518  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:54.228534  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:54.228548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:54.261854  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:54.261884  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:54.337793  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:54.329984    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.330545    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332031    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332516    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.334074    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:54.329984    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.330545    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332031    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332516    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.334074    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:54.337814  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:54.337828  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:54.374142  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:54.374176  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:54.444394  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:54.444430  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:54.487047  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:54.487074  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:54.531639  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:54.531667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:54.639157  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:54.639196  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:54.655755  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:54.655784  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:54.685950  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:54.685978  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:54.753837  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:54.753879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:57.341138  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:57.351729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:57.351806  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:57.383937  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:57.383962  346554 cri.go:89] found id: ""
	I1002 07:21:57.383970  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:57.384030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.387697  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:57.387774  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:57.413348  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:57.413372  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:57.413377  346554 cri.go:89] found id: ""
	I1002 07:21:57.413385  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:57.413451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.417397  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.420826  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:57.420904  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:57.453888  346554 cri.go:89] found id: ""
	I1002 07:21:57.453913  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.453922  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:57.453928  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:57.453986  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:57.483451  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:57.483472  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:57.483476  346554 cri.go:89] found id: ""
	I1002 07:21:57.483483  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:57.483541  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.487407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.490932  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:57.491034  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:57.526291  346554 cri.go:89] found id: ""
	I1002 07:21:57.526318  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.526327  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:57.526334  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:57.526391  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:57.554217  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:57.554297  346554 cri.go:89] found id: ""
	I1002 07:21:57.554320  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:57.554415  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.558417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:57.558494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:57.590610  346554 cri.go:89] found id: ""
	I1002 07:21:57.590632  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.590640  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:57.590649  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:57.590662  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:57.686336  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:57.686376  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:57.717511  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:57.717543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:57.754283  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:57.754326  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:57.785227  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:57.785258  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:57.869305  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:57.869342  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:57.909139  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:57.909171  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:57.926456  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:57.926487  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:57.995639  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:57.987505    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.988090    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.989876    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.990282    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.991551    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:57.987505    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.988090    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.989876    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.990282    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.991551    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:57.995664  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:57.995679  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:58.058207  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:58.058248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:58.125241  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:58.125284  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:00.654876  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:00.665832  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:00.665905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:00.693874  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:00.693939  346554 cri.go:89] found id: ""
	I1002 07:22:00.693962  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:00.694054  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.697859  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:00.697934  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:00.725245  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:00.725270  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:00.725276  346554 cri.go:89] found id: ""
	I1002 07:22:00.725284  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:00.725364  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.729223  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.732817  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:00.732935  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:00.758839  346554 cri.go:89] found id: ""
	I1002 07:22:00.758906  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.758929  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:00.758953  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:00.759039  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:00.799071  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:00.799149  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:00.799155  346554 cri.go:89] found id: ""
	I1002 07:22:00.799162  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:00.799234  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.803167  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.806750  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:00.806845  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:00.839560  346554 cri.go:89] found id: ""
	I1002 07:22:00.839587  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.839596  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:00.839602  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:00.839660  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:00.870224  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:00.870255  346554 cri.go:89] found id: ""
	I1002 07:22:00.870263  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:00.870336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.874393  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:00.874495  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:00.912075  346554 cri.go:89] found id: ""
	I1002 07:22:00.912105  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.912114  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:00.912124  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:00.912136  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:00.937824  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:00.937853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:00.995416  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:00.995451  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:01.066170  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:01.066205  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:01.097565  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:01.097596  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:01.177599  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:01.177641  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:01.279014  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:01.279051  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:01.294984  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:01.295013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:01.367956  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:01.359956    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.360472    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362061    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362543    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.364048    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:01.359956    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.360472    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362061    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362543    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.364048    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:01.368020  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:01.368050  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:01.410820  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:01.410865  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:01.438796  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:01.438821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:03.971937  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:03.983881  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:03.983958  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:04.015026  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:04.015047  346554 cri.go:89] found id: ""
	I1002 07:22:04.015055  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:04.015146  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.019432  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:04.019511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:04.047606  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:04.047638  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:04.047644  346554 cri.go:89] found id: ""
	I1002 07:22:04.047651  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:04.047716  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.052312  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.055940  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:04.056013  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:04.084749  346554 cri.go:89] found id: ""
	I1002 07:22:04.084774  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.084784  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:04.084791  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:04.084858  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:04.115693  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:04.115718  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:04.115724  346554 cri.go:89] found id: ""
	I1002 07:22:04.115732  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:04.115791  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.119451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.123387  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:04.123509  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:04.160601  346554 cri.go:89] found id: ""
	I1002 07:22:04.160634  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.160643  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:04.160650  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:04.160709  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:04.186914  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:04.186975  346554 cri.go:89] found id: ""
	I1002 07:22:04.187000  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:04.187074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.190897  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:04.190972  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:04.217225  346554 cri.go:89] found id: ""
	I1002 07:22:04.217292  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.217306  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:04.217320  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:04.217332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:04.248848  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:04.248876  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:04.265771  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:04.265801  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:04.331344  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:04.323383    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.324116    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.325749    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.326044    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.327474    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:04.323383    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.324116    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.325749    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.326044    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.327474    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:04.331380  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:04.331395  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:04.358729  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:04.358757  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:04.416966  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:04.417007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:04.455261  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:04.455298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:04.483009  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:04.483037  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:04.563547  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:04.563585  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:04.668263  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:04.668301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:04.744129  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:04.744172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:07.275239  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:07.285854  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:07.285925  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:07.312977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:07.312997  346554 cri.go:89] found id: ""
	I1002 07:22:07.313005  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:07.313060  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.316845  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:07.316920  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:07.346852  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:07.346874  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:07.346879  346554 cri.go:89] found id: ""
	I1002 07:22:07.346887  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:07.346943  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.350635  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.354162  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:07.354227  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:07.383691  346554 cri.go:89] found id: ""
	I1002 07:22:07.383716  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.383725  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:07.383732  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:07.383790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:07.412740  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:07.412762  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:07.412768  346554 cri.go:89] found id: ""
	I1002 07:22:07.412775  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:07.412874  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.416633  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.420294  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:07.420370  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:07.448452  346554 cri.go:89] found id: ""
	I1002 07:22:07.448481  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.448496  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:07.448503  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:07.448573  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:07.478691  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:07.478759  346554 cri.go:89] found id: ""
	I1002 07:22:07.478782  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:07.478877  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.484491  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:07.484566  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:07.526882  346554 cri.go:89] found id: ""
	I1002 07:22:07.526907  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.526916  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:07.526926  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:07.526940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:07.543682  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:07.543709  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:07.622365  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:07.613920    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.614676    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616380    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616942    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.618513    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:07.613920    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.614676    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616380    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616942    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.618513    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:07.622386  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:07.622401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:07.688381  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:07.688417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:07.716317  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:07.716368  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:07.765160  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:07.765187  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:07.863442  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:07.863480  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:07.890947  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:07.890975  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:07.931413  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:07.931445  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:07.994034  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:07.994116  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:08.029432  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:08.029459  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:10.612654  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:10.624226  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:10.624295  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:10.651797  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:10.651820  346554 cri.go:89] found id: ""
	I1002 07:22:10.651829  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:10.651887  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.655778  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:10.655861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:10.682781  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:10.682804  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:10.682810  346554 cri.go:89] found id: ""
	I1002 07:22:10.682817  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:10.682873  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.686610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.690176  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:10.690248  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:10.716340  346554 cri.go:89] found id: ""
	I1002 07:22:10.716365  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.716374  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:10.716380  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:10.716450  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:10.744916  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:10.744941  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:10.744947  346554 cri.go:89] found id: ""
	I1002 07:22:10.744954  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:10.745009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.748825  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.752367  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:10.752459  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:10.778426  346554 cri.go:89] found id: ""
	I1002 07:22:10.778491  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.778519  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:10.778545  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:10.778634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:10.816930  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:10.816956  346554 cri.go:89] found id: ""
	I1002 07:22:10.816965  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:10.817021  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.820675  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:10.820748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:10.848624  346554 cri.go:89] found id: ""
	I1002 07:22:10.848692  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.848716  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:10.848747  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:10.848784  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:10.949146  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:10.949183  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:10.966424  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:10.966503  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:11.050571  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:11.041861    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.042811    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044425    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044785    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.047001    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:11.041861    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.042811    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044425    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044785    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.047001    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:11.050590  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:11.050607  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:11.096274  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:11.096305  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:11.163795  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:11.163833  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:11.198136  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:11.198167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:11.281776  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:11.281815  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:11.314298  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:11.314329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:11.346046  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:11.346074  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:11.401509  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:11.401546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:13.937437  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:13.948853  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:13.948931  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:13.978524  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:13.978546  346554 cri.go:89] found id: ""
	I1002 07:22:13.978562  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:13.978622  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:13.983904  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:13.984002  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:14.018404  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:14.018427  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:14.018432  346554 cri.go:89] found id: ""
	I1002 07:22:14.018441  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:14.018501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.022898  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.027485  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:14.027580  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:14.067189  346554 cri.go:89] found id: ""
	I1002 07:22:14.067277  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.067293  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:14.067301  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:14.067380  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:14.098843  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:14.098868  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:14.098874  346554 cri.go:89] found id: ""
	I1002 07:22:14.098882  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:14.098938  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.103497  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.107744  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:14.107820  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:14.136768  346554 cri.go:89] found id: ""
	I1002 07:22:14.136797  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.136807  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:14.136813  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:14.136880  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:14.163984  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:14.164055  346554 cri.go:89] found id: ""
	I1002 07:22:14.164079  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:14.164165  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.168259  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:14.168337  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:14.201762  346554 cri.go:89] found id: ""
	I1002 07:22:14.201789  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.201799  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:14.201809  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:14.201822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:14.228036  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:14.228067  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:14.305247  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:14.305286  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:14.417180  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:14.417216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:14.434371  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:14.434404  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:14.494496  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:14.494534  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:14.530240  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:14.530274  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:14.565285  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:14.565312  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:14.656059  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:14.648012    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.648398    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.649913    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.650225    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.651841    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:14.648012    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.648398    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.649913    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.650225    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.651841    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:14.656082  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:14.656096  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:14.684431  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:14.684465  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:14.720953  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:14.720987  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:17.291251  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:17.303244  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:17.303315  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:17.330183  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:17.330208  346554 cri.go:89] found id: ""
	I1002 07:22:17.330217  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:17.330281  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.334207  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:17.334281  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:17.363238  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:17.363263  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:17.363269  346554 cri.go:89] found id: ""
	I1002 07:22:17.363276  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:17.363331  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.367005  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.370719  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:17.370792  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:17.397991  346554 cri.go:89] found id: ""
	I1002 07:22:17.398016  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.398026  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:17.398032  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:17.398092  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:17.431537  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:17.431562  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:17.431568  346554 cri.go:89] found id: ""
	I1002 07:22:17.431575  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:17.431631  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.435774  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.439628  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:17.439701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:17.470573  346554 cri.go:89] found id: ""
	I1002 07:22:17.470598  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.470614  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:17.470621  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:17.470689  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:17.496787  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:17.496813  346554 cri.go:89] found id: ""
	I1002 07:22:17.496822  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:17.496879  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.500676  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:17.500809  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:17.528111  346554 cri.go:89] found id: ""
	I1002 07:22:17.528136  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.528145  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:17.528155  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:17.528167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:17.629228  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:17.629269  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:17.719781  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:17.711134    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.712057    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713690    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713991    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.715616    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:17.711134    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.712057    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713690    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713991    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.715616    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:17.719804  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:17.719818  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:17.791077  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:17.791176  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:17.835873  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:17.835907  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:17.865669  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:17.865698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:17.947809  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:17.947851  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:17.966021  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:17.966054  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:17.993388  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:17.993419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:18.067826  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:18.067915  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:18.098854  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:18.098928  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:20.640412  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:20.654177  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:20.654280  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:20.689110  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:20.689138  346554 cri.go:89] found id: ""
	I1002 07:22:20.689146  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:20.689210  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.692968  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:20.693043  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:20.726246  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:20.726271  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:20.726276  346554 cri.go:89] found id: ""
	I1002 07:22:20.726284  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:20.726340  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.730329  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.734406  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:20.734503  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:20.762306  346554 cri.go:89] found id: ""
	I1002 07:22:20.762332  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.762341  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:20.762348  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:20.762406  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:20.801345  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:20.801370  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:20.801375  346554 cri.go:89] found id: ""
	I1002 07:22:20.801383  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:20.801461  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.805572  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.809363  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:20.809439  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:20.839370  346554 cri.go:89] found id: ""
	I1002 07:22:20.839396  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.839405  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:20.839411  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:20.839487  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:20.866883  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:20.866908  346554 cri.go:89] found id: ""
	I1002 07:22:20.866918  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:20.866994  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.871482  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:20.871602  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:20.915272  346554 cri.go:89] found id: ""
	I1002 07:22:20.915297  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.915306  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:20.915334  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:20.915354  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:20.969984  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:20.970023  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:21.008389  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:21.008426  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:21.097527  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:21.097564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:21.131052  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:21.131112  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:21.250056  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:21.250095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:21.266497  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:21.266528  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:21.336488  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:21.328099    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.328680    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330526    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330860    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.332595    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:21.328099    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.328680    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330526    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330860    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.332595    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:21.336517  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:21.336534  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:21.365447  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:21.365477  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:21.432439  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:21.432517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:21.464158  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:21.464186  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:23.993684  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:24.012128  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:24.012344  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:24.041820  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:24.041844  346554 cri.go:89] found id: ""
	I1002 07:22:24.041853  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:24.041913  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.045939  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:24.046012  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:24.080951  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:24.080971  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:24.080977  346554 cri.go:89] found id: ""
	I1002 07:22:24.080984  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:24.081042  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.086379  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.090878  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:24.090956  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:24.118754  346554 cri.go:89] found id: ""
	I1002 07:22:24.118793  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.118803  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:24.118809  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:24.118876  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:24.162937  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:24.162960  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:24.162967  346554 cri.go:89] found id: ""
	I1002 07:22:24.162975  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:24.163041  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.167416  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.171521  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:24.171612  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:24.198740  346554 cri.go:89] found id: ""
	I1002 07:22:24.198764  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.198774  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:24.198780  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:24.198849  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:24.226586  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:24.226607  346554 cri.go:89] found id: ""
	I1002 07:22:24.226616  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:24.226676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.230625  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:24.230701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:24.258053  346554 cri.go:89] found id: ""
	I1002 07:22:24.258089  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.258100  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:24.258110  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:24.258122  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:24.357393  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:24.357431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:24.375359  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:24.375390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:24.444675  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:24.444714  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:24.484227  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:24.484262  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:24.512674  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:24.512707  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:24.597691  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:24.589362    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.589905    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.591682    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.592352    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.593874    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:24.589362    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.589905    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.591682    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.592352    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.593874    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:24.597712  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:24.597728  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:24.628466  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:24.628492  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:24.706367  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:24.706408  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:24.737446  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:24.737475  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:24.822997  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:24.823036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:27.355482  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:27.366566  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:27.366636  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:27.394804  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:27.394828  346554 cri.go:89] found id: ""
	I1002 07:22:27.394837  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:27.394901  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.398931  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:27.399000  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:27.425553  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:27.425576  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:27.425582  346554 cri.go:89] found id: ""
	I1002 07:22:27.425590  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:27.425651  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.429400  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.433140  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:27.433237  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:27.463605  346554 cri.go:89] found id: ""
	I1002 07:22:27.463626  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.463635  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:27.463642  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:27.463701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:27.493043  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:27.493074  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:27.493080  346554 cri.go:89] found id: ""
	I1002 07:22:27.493087  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:27.493145  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.497072  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.500729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:27.500805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:27.531993  346554 cri.go:89] found id: ""
	I1002 07:22:27.532021  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.532031  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:27.532037  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:27.532097  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:27.559232  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:27.559310  346554 cri.go:89] found id: ""
	I1002 07:22:27.559329  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:27.559400  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.563624  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:27.563744  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:27.593254  346554 cri.go:89] found id: ""
	I1002 07:22:27.593281  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.593302  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:27.593313  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:27.593328  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:27.622961  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:27.622992  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:27.700292  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:27.690392    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.691740    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.692828    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694000    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694658    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:27.690392    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.691740    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.692828    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694000    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694658    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:27.700315  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:27.700329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:27.760790  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:27.760830  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:27.800937  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:27.800976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:27.879230  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:27.879273  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:27.910457  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:27.910561  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:27.998247  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:27.998287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:28.039823  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:28.039856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:28.148384  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:28.148472  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:28.170086  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:28.170114  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:30.702644  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:30.713672  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:30.713748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:30.742461  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:30.742484  346554 cri.go:89] found id: ""
	I1002 07:22:30.742493  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:30.742553  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.746359  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:30.746446  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:30.777229  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:30.777256  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:30.777261  346554 cri.go:89] found id: ""
	I1002 07:22:30.777269  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:30.777345  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.781661  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.785300  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:30.785373  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:30.812435  346554 cri.go:89] found id: ""
	I1002 07:22:30.812465  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.812474  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:30.812481  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:30.812558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:30.839730  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:30.839752  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:30.839758  346554 cri.go:89] found id: ""
	I1002 07:22:30.839765  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:30.839851  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.843582  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.847332  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:30.847414  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:30.877768  346554 cri.go:89] found id: ""
	I1002 07:22:30.877795  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.877804  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:30.877811  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:30.877919  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:30.906930  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:30.906954  346554 cri.go:89] found id: ""
	I1002 07:22:30.906970  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:30.907050  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.911004  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:30.911153  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:30.936781  346554 cri.go:89] found id: ""
	I1002 07:22:30.936817  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.936826  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:30.936836  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:30.936849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:30.963944  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:30.963978  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:31.039393  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:31.039431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:31.056356  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:31.056396  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:31.086443  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:31.086483  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:31.129305  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:31.129342  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:31.206518  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:31.206557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:31.246963  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:31.246992  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:31.349345  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:31.349380  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:31.424210  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:31.415481    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.416258    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.417862    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.418419    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.420138    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:31.415481    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.416258    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.417862    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.418419    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.420138    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:31.424235  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:31.424247  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:31.494342  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:31.494381  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.028701  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:34.039883  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:34.039955  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:34.082124  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:34.082149  346554 cri.go:89] found id: ""
	I1002 07:22:34.082158  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:34.082222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.086333  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:34.086408  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:34.115537  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:34.115562  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:34.115568  346554 cri.go:89] found id: ""
	I1002 07:22:34.115575  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:34.115632  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.119540  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.123109  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:34.123181  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:34.149943  346554 cri.go:89] found id: ""
	I1002 07:22:34.149969  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.149978  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:34.149985  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:34.150098  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:34.177023  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:34.177044  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.177051  346554 cri.go:89] found id: ""
	I1002 07:22:34.177060  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:34.177117  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.180893  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.184341  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:34.184418  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:34.211353  346554 cri.go:89] found id: ""
	I1002 07:22:34.211377  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.211385  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:34.211391  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:34.211449  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:34.237574  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:34.237593  346554 cri.go:89] found id: ""
	I1002 07:22:34.237601  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:34.237659  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.241551  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:34.241626  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:34.272007  346554 cri.go:89] found id: ""
	I1002 07:22:34.272030  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.272039  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:34.272048  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:34.272059  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:34.344503  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:34.344540  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:34.378151  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:34.378181  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:34.479542  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:34.479579  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:34.561912  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:34.553376    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.554044    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.555646    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.556517    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.558373    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:34.553376    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.554044    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.555646    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.556517    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.558373    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:34.561988  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:34.562009  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:34.627010  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:34.627046  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:34.675398  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:34.675431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:34.761258  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:34.761301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:34.783800  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:34.783847  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:34.822817  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:34.822856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.855272  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:34.855298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:37.390316  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:37.401208  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:37.401285  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:37.428835  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:37.428857  346554 cri.go:89] found id: ""
	I1002 07:22:37.428864  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:37.428934  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.433201  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:37.433276  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:37.461633  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:37.461664  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:37.461670  346554 cri.go:89] found id: ""
	I1002 07:22:37.461678  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:37.461736  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.465629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.469272  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:37.469348  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:37.498524  346554 cri.go:89] found id: ""
	I1002 07:22:37.498551  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.498561  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:37.498567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:37.498627  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:37.535431  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:37.535453  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:37.535458  346554 cri.go:89] found id: ""
	I1002 07:22:37.535465  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:37.535523  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.539518  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.543351  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:37.543429  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:37.569817  346554 cri.go:89] found id: ""
	I1002 07:22:37.569886  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.569912  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:37.569938  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:37.570048  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:37.600094  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:37.600161  346554 cri.go:89] found id: ""
	I1002 07:22:37.600184  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:37.600279  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.604474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:37.604627  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:37.635043  346554 cri.go:89] found id: ""
	I1002 07:22:37.635139  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.635164  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:37.635209  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:37.635241  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:37.652712  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:37.652747  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:37.724304  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:37.715214    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.715952    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.717909    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.718653    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.720486    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:37.715214    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.715952    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.717909    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.718653    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.720486    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:37.724327  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:37.724343  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:37.778979  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:37.779018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:37.823368  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:37.823400  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:37.852458  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:37.852487  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:37.935415  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:37.935451  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:38.032660  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:38.032698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:38.062211  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:38.062292  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:38.141041  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:38.141076  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:38.167504  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:38.167535  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:40.716529  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:40.727155  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:40.727237  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:40.759650  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:40.759670  346554 cri.go:89] found id: ""
	I1002 07:22:40.759677  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:40.759739  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.763794  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:40.763891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:40.799428  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:40.799495  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:40.799505  346554 cri.go:89] found id: ""
	I1002 07:22:40.799513  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:40.799587  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.804441  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.808181  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:40.808256  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:40.839434  346554 cri.go:89] found id: ""
	I1002 07:22:40.839458  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.839466  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:40.839479  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:40.839540  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:40.866347  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:40.866368  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:40.866373  346554 cri.go:89] found id: ""
	I1002 07:22:40.866380  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:40.866435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.870243  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.873802  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:40.873887  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:40.915472  346554 cri.go:89] found id: ""
	I1002 07:22:40.915499  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.915508  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:40.915515  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:40.915589  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:40.945530  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:40.945552  346554 cri.go:89] found id: ""
	I1002 07:22:40.945570  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:40.945629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.949410  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:40.949513  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:40.976546  346554 cri.go:89] found id: ""
	I1002 07:22:40.976589  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.976598  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:40.976608  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:40.976620  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:40.993923  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:40.993952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:41.069718  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:41.061732    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.062193    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.063798    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.064141    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.065342    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:41.061732    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.062193    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.063798    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.064141    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.065342    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:41.069746  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:41.069760  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:41.101275  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:41.101313  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:41.185486  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:41.185522  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:41.213391  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:41.213419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:41.286933  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:41.286973  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:41.325032  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:41.325063  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:41.427475  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:41.427517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:41.507722  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:41.507762  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:41.553697  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:41.553731  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:44.083713  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:44.094946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:44.095050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:44.122939  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:44.122961  346554 cri.go:89] found id: ""
	I1002 07:22:44.122970  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:44.123027  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.126926  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:44.127001  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:44.168228  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:44.168253  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:44.168259  346554 cri.go:89] found id: ""
	I1002 07:22:44.168267  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:44.168325  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.172203  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.176051  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:44.176154  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:44.207518  346554 cri.go:89] found id: ""
	I1002 07:22:44.207545  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.207554  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:44.207560  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:44.207619  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:44.236177  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:44.236200  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:44.236206  346554 cri.go:89] found id: ""
	I1002 07:22:44.236214  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:44.236274  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.239868  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.243456  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:44.243575  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:44.269491  346554 cri.go:89] found id: ""
	I1002 07:22:44.269568  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.269596  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:44.269612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:44.269687  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:44.295403  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:44.295423  346554 cri.go:89] found id: ""
	I1002 07:22:44.295431  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:44.295490  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.299440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:44.299555  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:44.333034  346554 cri.go:89] found id: ""
	I1002 07:22:44.333110  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.333136  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:44.333175  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:44.333210  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:44.364108  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:44.364139  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:44.433101  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:44.424314    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.424960    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.426515    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.427164    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.428946    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:44.424314    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.424960    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.426515    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.427164    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.428946    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:44.433123  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:44.433137  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:44.489676  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:44.489711  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:44.535780  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:44.535819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:44.563832  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:44.563862  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:44.644267  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:44.644308  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:44.678038  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:44.678077  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:44.779429  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:44.779467  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:44.802305  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:44.802335  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:44.828371  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:44.828400  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.412789  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:47.423373  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:47.423464  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:47.451136  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:47.451162  346554 cri.go:89] found id: ""
	I1002 07:22:47.451171  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:47.451237  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.455412  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:47.455531  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:47.487387  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:47.487418  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:47.487424  346554 cri.go:89] found id: ""
	I1002 07:22:47.487432  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:47.487491  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.491360  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.495265  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:47.495336  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:47.534120  346554 cri.go:89] found id: ""
	I1002 07:22:47.534144  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.534153  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:47.534159  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:47.534223  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:47.567581  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.567604  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:47.567610  346554 cri.go:89] found id: ""
	I1002 07:22:47.567618  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:47.567676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.571558  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.575428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:47.575500  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:47.604017  346554 cri.go:89] found id: ""
	I1002 07:22:47.604041  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.604050  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:47.604057  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:47.604178  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:47.631246  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:47.631266  346554 cri.go:89] found id: ""
	I1002 07:22:47.631275  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:47.631336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.635224  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:47.635329  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:47.662879  346554 cri.go:89] found id: ""
	I1002 07:22:47.662906  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.662916  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:47.662925  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:47.662969  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:47.758850  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:47.758889  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:47.787003  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:47.787035  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.865561  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:47.865598  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:47.894009  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:47.894083  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:47.911472  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:47.911547  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:47.992995  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:47.978023    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.979713    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986171    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986781    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.988190    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:47.978023    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.979713    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986171    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986781    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.988190    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:47.993061  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:47.993095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:48.054795  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:48.054833  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:48.105647  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:48.105681  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:48.136822  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:48.136852  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:48.221826  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:48.221868  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:50.759146  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:50.770232  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:50.770304  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:50.808978  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:50.808999  346554 cri.go:89] found id: ""
	I1002 07:22:50.809014  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:50.809071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.812891  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:50.812973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:50.844548  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:50.844621  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:50.844634  346554 cri.go:89] found id: ""
	I1002 07:22:50.844643  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:50.844704  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.848854  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.853318  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:50.853395  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:50.879864  346554 cri.go:89] found id: ""
	I1002 07:22:50.879885  346554 logs.go:282] 0 containers: []
	W1002 07:22:50.879894  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:50.879901  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:50.879978  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:50.913482  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:50.913502  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:50.913506  346554 cri.go:89] found id: ""
	I1002 07:22:50.913514  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:50.913571  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.917411  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.920913  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:50.920995  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:50.953742  346554 cri.go:89] found id: ""
	I1002 07:22:50.953769  346554 logs.go:282] 0 containers: []
	W1002 07:22:50.953778  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:50.953785  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:50.953849  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:50.982216  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:50.982239  346554 cri.go:89] found id: ""
	I1002 07:22:50.982247  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:50.982312  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.985960  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:50.986036  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:51.023369  346554 cri.go:89] found id: ""
	I1002 07:22:51.023407  346554 logs.go:282] 0 containers: []
	W1002 07:22:51.023416  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:51.023425  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:51.023437  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:51.124423  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:51.124471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:51.162362  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:51.162466  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:51.193077  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:51.193120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:51.209317  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:51.209348  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:51.286706  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:51.277838    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.278649    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280280    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280639    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.282163    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:51.277838    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.278649    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280280    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280639    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.282163    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:51.286736  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:51.286768  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:51.314928  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:51.315005  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:51.375178  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:51.375216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:51.450324  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:51.450368  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:51.478495  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:51.478526  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:51.563131  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:51.563178  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:54.112345  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:54.123567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:54.123643  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:54.154215  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:54.154239  346554 cri.go:89] found id: ""
	I1002 07:22:54.154247  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:54.154306  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.158242  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:54.158319  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:54.192307  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:54.192332  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:54.192343  346554 cri.go:89] found id: ""
	I1002 07:22:54.192351  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:54.192419  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.197194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.201582  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:54.201705  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:54.228380  346554 cri.go:89] found id: ""
	I1002 07:22:54.228415  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.228425  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:54.228432  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:54.228525  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:54.256056  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:54.256080  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:54.256087  346554 cri.go:89] found id: ""
	I1002 07:22:54.256094  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:54.256155  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.260143  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.263934  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:54.264008  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:54.290214  346554 cri.go:89] found id: ""
	I1002 07:22:54.290241  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.290251  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:54.290256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:54.290314  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:54.319063  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:54.319117  346554 cri.go:89] found id: ""
	I1002 07:22:54.319126  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:54.319184  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.323448  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:54.323547  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:54.354341  346554 cri.go:89] found id: ""
	I1002 07:22:54.354366  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.354374  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:54.354384  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:54.354396  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:54.409595  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:54.409633  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:54.449908  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:54.449944  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:54.532130  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:54.532170  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:54.559794  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:54.559822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:54.593620  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:54.593651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:54.700915  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:54.700951  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:54.727426  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:54.727452  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:54.756226  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:54.756263  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:54.841269  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:54.841312  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:54.859387  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:54.859425  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:54.940701  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:54.932413    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.933246    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.934849    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.935238    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.936807    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:54.932413    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.933246    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.934849    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.935238    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.936807    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:57.441672  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:57.453569  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:57.453639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:57.483699  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:57.483722  346554 cri.go:89] found id: ""
	I1002 07:22:57.483746  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:57.483845  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.487681  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:57.487775  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:57.518495  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:57.518520  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:57.518526  346554 cri.go:89] found id: ""
	I1002 07:22:57.518534  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:57.518593  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.522615  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.526448  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:57.526523  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:57.553219  346554 cri.go:89] found id: ""
	I1002 07:22:57.553246  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.553255  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:57.553263  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:57.553327  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:57.582109  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:57.582132  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:57.582137  346554 cri.go:89] found id: ""
	I1002 07:22:57.582146  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:57.582209  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.586222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.590675  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:57.590752  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:57.621475  346554 cri.go:89] found id: ""
	I1002 07:22:57.621544  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.621567  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:57.621592  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:57.621680  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:57.647238  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:57.647304  346554 cri.go:89] found id: ""
	I1002 07:22:57.647329  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:57.647425  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.651299  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:57.651391  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:57.681221  346554 cri.go:89] found id: ""
	I1002 07:22:57.681298  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.681324  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:57.681350  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:57.681387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:57.757042  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:57.757079  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:57.789483  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:57.789519  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:57.876258  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:57.876301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:57.909957  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:57.909986  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:57.994768  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:57.985195    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.985977    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.987651    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.988458    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.990380    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:57.985195    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.985977    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.987651    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.988458    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.990380    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:57.994790  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:57.994804  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:58.057805  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:58.057845  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:58.093196  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:58.093227  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:58.192017  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:58.192055  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:58.209558  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:58.209587  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:58.236404  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:58.236433  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:00.781745  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:00.796477  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:00.796552  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:00.823241  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:00.823265  346554 cri.go:89] found id: ""
	I1002 07:23:00.823273  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:00.823327  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.827586  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:00.827675  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:00.862251  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:00.862274  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:00.862280  346554 cri.go:89] found id: ""
	I1002 07:23:00.862287  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:00.862348  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.866453  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.870120  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:00.870189  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:00.910250  346554 cri.go:89] found id: ""
	I1002 07:23:00.910318  346554 logs.go:282] 0 containers: []
	W1002 07:23:00.910341  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:00.910366  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:00.910451  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:00.939142  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:00.939208  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:00.939234  346554 cri.go:89] found id: ""
	I1002 07:23:00.939243  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:00.939300  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.943281  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.947110  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:00.947180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:00.979402  346554 cri.go:89] found id: ""
	I1002 07:23:00.979431  346554 logs.go:282] 0 containers: []
	W1002 07:23:00.979444  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:00.979452  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:00.979518  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:01.016038  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:01.016103  346554 cri.go:89] found id: ""
	I1002 07:23:01.016131  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:01.016225  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:01.020366  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:01.020520  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:01.049712  346554 cri.go:89] found id: ""
	I1002 07:23:01.049780  346554 logs.go:282] 0 containers: []
	W1002 07:23:01.049803  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:01.049831  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:01.049870  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:01.101253  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:01.101287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:01.200014  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:01.200053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:01.277860  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:01.264774    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.266699    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.271332    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.272085    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.273912    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:01.264774    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.266699    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.271332    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.272085    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.273912    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:01.277885  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:01.277898  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:01.341507  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:01.341545  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:01.413278  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:01.413313  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:01.446875  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:01.446914  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:01.475436  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:01.475464  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:01.551813  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:01.551853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:01.585150  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:01.585187  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:01.601574  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:01.601606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:04.131042  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:04.142520  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:04.142634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:04.176669  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:04.176692  346554 cri.go:89] found id: ""
	I1002 07:23:04.176701  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:04.176763  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.180972  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:04.181051  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:04.208821  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:04.208846  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:04.208851  346554 cri.go:89] found id: ""
	I1002 07:23:04.208859  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:04.208925  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.213191  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.217006  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:04.217129  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:04.245751  346554 cri.go:89] found id: ""
	I1002 07:23:04.245775  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.245790  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:04.245798  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:04.245859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:04.284664  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:04.284685  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:04.284689  346554 cri.go:89] found id: ""
	I1002 07:23:04.284697  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:04.284756  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.288986  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.292617  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:04.292700  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:04.320145  346554 cri.go:89] found id: ""
	I1002 07:23:04.320171  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.320180  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:04.320187  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:04.320245  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:04.347600  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:04.347622  346554 cri.go:89] found id: ""
	I1002 07:23:04.347631  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:04.347686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.351440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:04.351511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:04.383653  346554 cri.go:89] found id: ""
	I1002 07:23:04.383732  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.383749  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:04.383759  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:04.383775  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:04.440177  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:04.440218  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:04.468956  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:04.469027  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:04.545741  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:04.545780  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:04.579865  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:04.579895  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:04.681656  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:04.681695  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:04.752352  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:04.744202   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.744834   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746456   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746996   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.748061   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:04.744202   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.744834   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746456   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746996   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.748061   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:04.752373  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:04.752387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:04.793420  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:04.793493  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:04.864258  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:04.864293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:04.893921  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:04.894006  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:04.911663  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:04.911693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.444239  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:07.455140  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:07.455218  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:07.484101  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.484124  346554 cri.go:89] found id: ""
	I1002 07:23:07.484133  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:07.484189  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.488067  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:07.488145  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:07.522958  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:07.523021  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:07.523044  346554 cri.go:89] found id: ""
	I1002 07:23:07.523071  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:07.523194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.527249  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.531022  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:07.531124  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:07.557498  346554 cri.go:89] found id: ""
	I1002 07:23:07.557519  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.557528  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:07.557535  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:07.557609  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:07.584061  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:07.584092  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:07.584096  346554 cri.go:89] found id: ""
	I1002 07:23:07.584105  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:07.584170  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.587957  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.591564  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:07.591639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:07.619944  346554 cri.go:89] found id: ""
	I1002 07:23:07.619971  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.619980  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:07.619987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:07.620050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:07.648834  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:07.648855  346554 cri.go:89] found id: ""
	I1002 07:23:07.648863  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:07.648919  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.652819  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:07.652937  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:07.682396  346554 cri.go:89] found id: ""
	I1002 07:23:07.682421  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.682430  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:07.682439  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:07.682452  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:07.751625  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:07.743061   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.744026   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.745740   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.746058   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.747713   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:07.743061   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.744026   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.745740   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.746058   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.747713   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:07.751650  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:07.751667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.778524  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:07.778551  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:07.850872  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:07.850910  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:07.887246  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:07.887283  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:07.959701  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:07.959738  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:07.989632  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:07.989661  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:08.009848  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:08.009885  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:08.041024  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:08.041052  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:08.120762  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:08.120798  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:08.174204  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:08.174234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:10.791227  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:10.804748  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:10.804834  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:10.833209  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:10.833256  346554 cri.go:89] found id: ""
	I1002 07:23:10.833264  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:10.833327  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.837233  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:10.837307  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:10.867407  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:10.867431  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:10.867436  346554 cri.go:89] found id: ""
	I1002 07:23:10.867444  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:10.867501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.871289  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.874962  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:10.875041  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:10.909346  346554 cri.go:89] found id: ""
	I1002 07:23:10.909372  346554 logs.go:282] 0 containers: []
	W1002 07:23:10.909381  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:10.909388  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:10.909444  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:10.944052  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:10.944127  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:10.944152  346554 cri.go:89] found id: ""
	I1002 07:23:10.944181  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:10.944285  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.952530  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.957003  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:10.957085  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:10.984253  346554 cri.go:89] found id: ""
	I1002 07:23:10.984287  346554 logs.go:282] 0 containers: []
	W1002 07:23:10.984297  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:10.984321  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:10.984401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:11.018350  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:11.018417  346554 cri.go:89] found id: ""
	I1002 07:23:11.018442  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:11.018520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:11.022612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:11.022707  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:11.054294  346554 cri.go:89] found id: ""
	I1002 07:23:11.054371  346554 logs.go:282] 0 containers: []
	W1002 07:23:11.054394  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:11.054437  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:11.054471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:11.132821  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:11.124867   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.125650   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.126895   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.127432   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.129002   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:11.124867   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.125650   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.126895   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.127432   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.129002   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:11.132846  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:11.132859  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:11.161373  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:11.161401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:11.219899  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:11.219936  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:11.250524  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:11.250554  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:11.282533  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:11.282564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:11.385870  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:11.385909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:11.402968  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:11.402997  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:11.447948  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:11.447983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:11.521218  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:11.521256  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:11.551246  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:11.551320  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:14.129146  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:14.140212  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:14.140315  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:14.167561  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:14.167585  346554 cri.go:89] found id: ""
	I1002 07:23:14.167593  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:14.167691  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.171728  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:14.171841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:14.198571  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:14.198594  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:14.198600  346554 cri.go:89] found id: ""
	I1002 07:23:14.198607  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:14.198693  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.202658  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.207962  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:14.208057  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:14.233944  346554 cri.go:89] found id: ""
	I1002 07:23:14.233970  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.233979  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:14.233986  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:14.234064  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:14.264854  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:14.264878  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:14.264884  346554 cri.go:89] found id: ""
	I1002 07:23:14.264892  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:14.264948  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.268797  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.272677  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:14.272756  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:14.304992  346554 cri.go:89] found id: ""
	I1002 07:23:14.305031  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.305041  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:14.305047  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:14.305120  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:14.335500  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:14.335570  346554 cri.go:89] found id: ""
	I1002 07:23:14.335593  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:14.335684  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.339428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:14.339502  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:14.366928  346554 cri.go:89] found id: ""
	I1002 07:23:14.366954  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.366964  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:14.366973  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:14.366984  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:14.441765  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:14.441808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:14.473510  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:14.473541  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:14.552162  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:14.552201  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:14.586130  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:14.586160  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:14.602135  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:14.602164  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:14.638523  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:14.638557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:14.717772  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:14.717808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:14.748211  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:14.748283  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:14.848964  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:14.849003  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:14.926254  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:14.916550   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.917229   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.918910   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.919742   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.921374   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:14.916550   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.917229   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.918910   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.919742   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.921374   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:14.926277  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:14.926290  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:17.456912  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:17.467889  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:17.467979  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:17.495434  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:17.495457  346554 cri.go:89] found id: ""
	I1002 07:23:17.495466  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:17.495524  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.499591  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:17.499663  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:17.535737  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:17.535757  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:17.535761  346554 cri.go:89] found id: ""
	I1002 07:23:17.535768  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:17.535826  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.540069  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.543817  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:17.543891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:17.573877  346554 cri.go:89] found id: ""
	I1002 07:23:17.573907  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.573917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:17.573923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:17.573989  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:17.609297  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:17.609320  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:17.609326  346554 cri.go:89] found id: ""
	I1002 07:23:17.609333  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:17.609390  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.613640  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.617183  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:17.617253  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:17.647944  346554 cri.go:89] found id: ""
	I1002 07:23:17.647971  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.647980  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:17.647987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:17.648045  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:17.674528  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:17.674552  346554 cri.go:89] found id: ""
	I1002 07:23:17.674561  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:17.674617  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.678979  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:17.679143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:17.706803  346554 cri.go:89] found id: ""
	I1002 07:23:17.706828  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.706837  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:17.706846  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:17.706857  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:17.801171  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:17.801207  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:17.817922  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:17.817952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:17.889064  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:17.889103  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:17.971481  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:17.971518  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:18.051668  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:18.051712  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:18.090695  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:18.090723  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:18.162304  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:18.153808   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.154523   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156207   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156763   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.158433   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:18.153808   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.154523   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156207   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156763   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.158433   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:18.162328  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:18.162343  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:18.194200  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:18.194233  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:18.231522  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:18.231557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:18.263215  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:18.263246  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:20.795234  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:20.807871  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:20.807939  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:20.839049  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:20.839070  346554 cri.go:89] found id: ""
	I1002 07:23:20.839098  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:20.839172  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.842946  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:20.843023  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:20.873446  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:20.873469  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:20.873475  346554 cri.go:89] found id: ""
	I1002 07:23:20.873484  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:20.873540  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.877435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.881337  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:20.881415  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:20.918940  346554 cri.go:89] found id: ""
	I1002 07:23:20.918971  346554 logs.go:282] 0 containers: []
	W1002 07:23:20.918980  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:20.918987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:20.919046  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:20.951052  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:20.951075  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:20.951112  346554 cri.go:89] found id: ""
	I1002 07:23:20.951120  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:20.951185  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.955805  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.959649  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:20.959737  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:20.987685  346554 cri.go:89] found id: ""
	I1002 07:23:20.987710  346554 logs.go:282] 0 containers: []
	W1002 07:23:20.987719  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:20.987726  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:20.987792  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:21.028577  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:21.028602  346554 cri.go:89] found id: ""
	I1002 07:23:21.028622  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:21.028683  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:21.032899  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:21.032977  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:21.062654  346554 cri.go:89] found id: ""
	I1002 07:23:21.062679  346554 logs.go:282] 0 containers: []
	W1002 07:23:21.062688  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:21.062698  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:21.062710  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:21.091027  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:21.091059  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:21.159267  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:21.159307  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:21.231814  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:21.231856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:21.263174  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:21.263205  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:21.310161  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:21.310194  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:21.349961  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:21.349997  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:21.379224  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:21.379306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:21.454682  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:21.454722  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:21.560920  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:21.560960  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:21.578179  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:21.578211  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:21.668218  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:21.658544   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.659665   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.660225   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662214   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662758   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:21.658544   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.659665   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.660225   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662214   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662758   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:24.169201  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:24.181390  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:24.181463  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:24.213873  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:24.213896  346554 cri.go:89] found id: ""
	I1002 07:23:24.213905  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:24.213963  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.217730  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:24.217807  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:24.252439  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:24.252471  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:24.252476  346554 cri.go:89] found id: ""
	I1002 07:23:24.252484  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:24.252567  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.256307  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.260273  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:24.260349  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:24.287826  346554 cri.go:89] found id: ""
	I1002 07:23:24.287852  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.287862  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:24.287870  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:24.287973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:24.315859  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:24.315884  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:24.315890  346554 cri.go:89] found id: ""
	I1002 07:23:24.315897  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:24.315975  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.319993  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.323777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:24.323877  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:24.354601  346554 cri.go:89] found id: ""
	I1002 07:23:24.354631  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.354642  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:24.354648  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:24.354730  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:24.384370  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:24.384395  346554 cri.go:89] found id: ""
	I1002 07:23:24.384403  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:24.384488  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.388615  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:24.388695  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:24.415488  346554 cri.go:89] found id: ""
	I1002 07:23:24.415514  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.415523  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:24.415533  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:24.415546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:24.458158  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:24.458192  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:24.534624  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:24.534667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:24.567982  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:24.568016  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:24.596275  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:24.596306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:24.674293  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:24.674334  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:24.777997  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:24.778039  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:24.801006  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:24.801036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:24.862265  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:24.862303  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:24.913721  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:24.913755  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:24.991414  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:24.983196   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.983791   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985038   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985724   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.987370   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:24.983196   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.983791   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985038   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985724   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.987370   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:24.991443  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:24.991458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.525665  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:27.536783  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:27.536869  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:27.563440  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.563507  346554 cri.go:89] found id: ""
	I1002 07:23:27.563531  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:27.563623  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.568154  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:27.568278  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:27.597184  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:27.597205  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:27.597211  346554 cri.go:89] found id: ""
	I1002 07:23:27.597230  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:27.597306  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.601073  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.604808  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:27.604880  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:27.635124  346554 cri.go:89] found id: ""
	I1002 07:23:27.635147  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.635155  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:27.635161  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:27.635220  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:27.662383  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:27.662455  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:27.662474  346554 cri.go:89] found id: ""
	I1002 07:23:27.662500  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:27.662607  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.666537  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.670164  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:27.670238  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:27.697001  346554 cri.go:89] found id: ""
	I1002 07:23:27.697028  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.697037  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:27.697044  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:27.697127  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:27.722638  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:27.722662  346554 cri.go:89] found id: ""
	I1002 07:23:27.722672  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:27.722728  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.726512  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:27.726591  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:27.755270  346554 cri.go:89] found id: ""
	I1002 07:23:27.755300  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.755309  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:27.755319  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:27.755330  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:27.854338  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:27.854379  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:27.928550  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:27.920395   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.921207   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.922978   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.923800   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.924646   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:27.920395   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.921207   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.922978   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.923800   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.924646   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:27.928577  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:27.928590  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.960015  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:27.960047  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:28.025647  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:28.025706  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:28.064089  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:28.064125  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:28.158385  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:28.158423  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:28.196505  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:28.196533  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:28.215893  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:28.215921  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:28.246774  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:28.246821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:28.274010  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:28.274036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:30.852724  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:30.863588  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:30.863660  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:30.891349  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:30.891371  346554 cri.go:89] found id: ""
	I1002 07:23:30.891380  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:30.891457  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.895249  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:30.895343  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:30.922333  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:30.922356  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:30.922361  346554 cri.go:89] found id: ""
	I1002 07:23:30.922368  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:30.922423  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.926269  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.929885  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:30.929957  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:30.956216  346554 cri.go:89] found id: ""
	I1002 07:23:30.956253  346554 logs.go:282] 0 containers: []
	W1002 07:23:30.956269  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:30.956285  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:30.956347  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:30.984076  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:30.984101  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:30.984107  346554 cri.go:89] found id: ""
	I1002 07:23:30.984121  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:30.984182  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.988082  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.991650  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:30.991741  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:31.028148  346554 cri.go:89] found id: ""
	I1002 07:23:31.028174  346554 logs.go:282] 0 containers: []
	W1002 07:23:31.028184  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:31.028190  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:31.028274  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:31.057090  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:31.057116  346554 cri.go:89] found id: ""
	I1002 07:23:31.057125  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:31.057195  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:31.064614  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:31.064695  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:31.096928  346554 cri.go:89] found id: ""
	I1002 07:23:31.096996  346554 logs.go:282] 0 containers: []
	W1002 07:23:31.097022  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:31.097042  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:31.097069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:31.155662  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:31.155701  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:31.202926  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:31.202958  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:31.236483  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:31.236508  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:31.341179  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:31.341216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:31.368996  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:31.369022  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:31.449499  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:31.449539  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:31.476326  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:31.476354  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:31.561871  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:31.561909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:31.597214  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:31.597243  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:31.614646  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:31.614674  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:31.686141  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:31.672626   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.673293   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675177   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675791   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.677294   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:31.672626   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.673293   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675177   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675791   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.677294   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:34.187051  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:34.198084  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:34.198163  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:34.225977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:34.226000  346554 cri.go:89] found id: ""
	I1002 07:23:34.226009  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:34.226094  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.230977  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:34.231053  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:34.258817  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:34.258840  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:34.258845  346554 cri.go:89] found id: ""
	I1002 07:23:34.258853  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:34.258908  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.262894  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.266671  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:34.266772  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:34.296183  346554 cri.go:89] found id: ""
	I1002 07:23:34.296207  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.296217  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:34.296223  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:34.296283  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:34.329604  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:34.329678  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:34.329698  346554 cri.go:89] found id: ""
	I1002 07:23:34.329722  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:34.329830  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.333641  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.337102  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:34.337170  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:34.365600  346554 cri.go:89] found id: ""
	I1002 07:23:34.365626  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.365636  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:34.365645  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:34.365708  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:34.393323  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:34.393347  346554 cri.go:89] found id: ""
	I1002 07:23:34.393357  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:34.393439  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.397338  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:34.397411  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:34.423876  346554 cri.go:89] found id: ""
	I1002 07:23:34.423899  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.423908  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:34.423918  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:34.423934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:34.453221  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:34.453251  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:34.481067  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:34.481095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:34.558614  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:34.558651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:34.601917  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:34.601948  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:34.705602  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:34.705637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:34.769442  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:34.760694   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.761723   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.762620   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764275   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764621   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:34.760694   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.761723   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.762620   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764275   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764621   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:34.769466  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:34.769478  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:34.808589  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:34.808615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:34.869982  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:34.870024  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:34.959694  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:34.959739  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:34.976284  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:34.976319  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:37.518488  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:37.530159  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:37.530242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:37.557004  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:37.557026  346554 cri.go:89] found id: ""
	I1002 07:23:37.557035  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:37.557091  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.560903  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:37.560976  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:37.593556  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:37.593580  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:37.593586  346554 cri.go:89] found id: ""
	I1002 07:23:37.593594  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:37.593652  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.597692  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.601598  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:37.601672  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:37.628723  346554 cri.go:89] found id: ""
	I1002 07:23:37.628751  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.628761  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:37.628767  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:37.628832  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:37.656989  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:37.657010  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:37.657014  346554 cri.go:89] found id: ""
	I1002 07:23:37.657022  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:37.657090  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.660940  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.664730  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:37.664810  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:37.690545  346554 cri.go:89] found id: ""
	I1002 07:23:37.690567  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.690575  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:37.690582  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:37.690638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:37.718139  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:37.718164  346554 cri.go:89] found id: ""
	I1002 07:23:37.718173  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:37.718239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.722013  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:37.722130  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:37.748320  346554 cri.go:89] found id: ""
	I1002 07:23:37.748387  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.748410  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:37.748439  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:37.748478  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:37.848896  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:37.848937  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:37.935000  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:37.926953   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.927824   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929407   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929842   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.931438   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:37.926953   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.927824   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929407   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929842   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.931438   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:37.935035  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:37.935050  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:37.998904  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:37.998949  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:38.039239  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:38.039274  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:38.133839  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:38.133878  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:38.164590  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:38.164617  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:38.247363  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:38.247401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:38.263025  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:38.263053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:38.292185  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:38.292215  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:38.324631  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:38.324662  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:40.856053  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:40.866969  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:40.867037  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:40.908779  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:40.908802  346554 cri.go:89] found id: ""
	I1002 07:23:40.908811  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:40.908882  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.912652  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:40.912724  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:40.938681  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:40.938711  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:40.938717  346554 cri.go:89] found id: ""
	I1002 07:23:40.938725  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:40.938780  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.942512  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.945790  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:40.945860  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:40.973961  346554 cri.go:89] found id: ""
	I1002 07:23:40.974043  346554 logs.go:282] 0 containers: []
	W1002 07:23:40.974067  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:40.974093  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:40.974208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:41.001128  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:41.001152  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:41.001158  346554 cri.go:89] found id: ""
	I1002 07:23:41.001165  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:41.001239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.007592  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.012525  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:41.012642  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:41.044447  346554 cri.go:89] found id: ""
	I1002 07:23:41.044521  346554 logs.go:282] 0 containers: []
	W1002 07:23:41.044545  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:41.044571  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:41.044654  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:41.083149  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:41.083216  346554 cri.go:89] found id: ""
	I1002 07:23:41.083250  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:41.083338  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.087534  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:41.087663  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:41.118406  346554 cri.go:89] found id: ""
	I1002 07:23:41.118470  346554 logs.go:282] 0 containers: []
	W1002 07:23:41.118494  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:41.118528  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:41.118559  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:41.195975  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:41.196011  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:41.227140  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:41.227172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:41.313141  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:41.313180  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:41.416180  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:41.416218  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:41.459495  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:41.459536  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:41.488753  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:41.488785  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:41.532527  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:41.532560  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:41.548856  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:41.548885  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:41.618600  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:41.608308   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.609017   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.611140   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.612779   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.613471   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:41.608308   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.609017   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.611140   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.612779   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.613471   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:41.618624  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:41.618638  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:41.646628  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:41.646656  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.221221  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:44.231877  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:44.231950  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:44.257682  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:44.257714  346554 cri.go:89] found id: ""
	I1002 07:23:44.257724  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:44.257781  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.261470  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:44.261568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:44.291709  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.291732  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:44.291738  346554 cri.go:89] found id: ""
	I1002 07:23:44.291749  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:44.291806  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.295774  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.299744  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:44.299891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:44.326325  346554 cri.go:89] found id: ""
	I1002 07:23:44.326361  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.326372  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:44.326396  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:44.326476  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:44.353658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:44.353682  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:44.353687  346554 cri.go:89] found id: ""
	I1002 07:23:44.353694  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:44.353752  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.357660  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.361374  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:44.361448  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:44.390237  346554 cri.go:89] found id: ""
	I1002 07:23:44.390271  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.390281  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:44.390287  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:44.390356  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:44.421420  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:44.421444  346554 cri.go:89] found id: ""
	I1002 07:23:44.421453  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:44.421520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.425406  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:44.425480  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:44.453498  346554 cri.go:89] found id: ""
	I1002 07:23:44.453575  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.453599  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:44.453627  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:44.453663  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:44.469406  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:44.469489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:44.537881  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:44.529402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.530101   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.531787   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.532402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.534048   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:44.529402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.530101   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.531787   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.532402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.534048   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:44.537947  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:44.537976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:44.566669  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:44.566750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.626234  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:44.626311  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:44.663981  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:44.664015  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:44.743176  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:44.743211  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:44.769609  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:44.769637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:44.850618  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:44.850654  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:44.956047  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:44.956089  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:44.988388  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:44.988421  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:47.617924  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:47.629050  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:47.629142  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:47.657724  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:47.657747  346554 cri.go:89] found id: ""
	I1002 07:23:47.657756  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:47.657814  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.661805  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:47.661878  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:47.691884  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:47.691906  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:47.691911  346554 cri.go:89] found id: ""
	I1002 07:23:47.691919  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:47.691978  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.695983  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.699611  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:47.699685  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:47.731628  346554 cri.go:89] found id: ""
	I1002 07:23:47.731654  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.731664  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:47.731671  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:47.731732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:47.760694  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:47.760718  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:47.760723  346554 cri.go:89] found id: ""
	I1002 07:23:47.760731  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:47.760830  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.764776  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.768282  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:47.768363  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:47.800941  346554 cri.go:89] found id: ""
	I1002 07:23:47.800967  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.800976  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:47.800982  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:47.801049  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:47.828847  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:47.828870  346554 cri.go:89] found id: ""
	I1002 07:23:47.828879  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:47.828955  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.832777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:47.832850  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:47.861095  346554 cri.go:89] found id: ""
	I1002 07:23:47.861122  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.861131  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:47.861141  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:47.861184  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:47.893617  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:47.893649  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:47.990939  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:47.990977  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:48.007073  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:48.007153  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:48.043757  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:48.043786  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:48.136713  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:48.136750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:48.168119  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:48.168151  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:48.251880  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:48.251919  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:48.285530  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:48.285566  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:48.357500  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:48.349599   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.350239   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.351899   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.352380   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.353981   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:48.349599   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.350239   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.351899   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.352380   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.353981   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:48.357522  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:48.357537  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:48.403215  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:48.403293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.006650  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:51.028354  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:51.028471  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:51.057229  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:51.057253  346554 cri.go:89] found id: ""
	I1002 07:23:51.057262  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:51.057329  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.061731  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:51.061807  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:51.089750  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:51.089772  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:51.089778  346554 cri.go:89] found id: ""
	I1002 07:23:51.089785  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:51.089848  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.094055  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.097989  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:51.098090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:51.125460  346554 cri.go:89] found id: ""
	I1002 07:23:51.125487  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.125510  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:51.125536  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:51.125611  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:51.155658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.155684  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:51.155689  346554 cri.go:89] found id: ""
	I1002 07:23:51.155698  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:51.155757  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.159937  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.164562  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:51.164639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:51.194590  346554 cri.go:89] found id: ""
	I1002 07:23:51.194626  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.194635  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:51.194642  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:51.194720  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:51.230400  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:51.230424  346554 cri.go:89] found id: ""
	I1002 07:23:51.230433  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:51.230501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.235241  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:51.235335  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:51.264526  346554 cri.go:89] found id: ""
	I1002 07:23:51.264551  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.264562  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:51.264573  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:51.264603  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:51.292045  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:51.292128  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.377066  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:51.377104  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:51.408242  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:51.408273  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:51.437071  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:51.437100  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:51.508699  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:51.498128   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.498923   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.500573   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.501129   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.502653   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:51.498128   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.498923   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.500573   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.501129   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.502653   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:51.508723  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:51.508736  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:51.594052  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:51.594094  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:51.631968  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:51.632002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:51.710908  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:51.710950  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:51.751275  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:51.751309  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:51.859428  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:51.859510  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:54.376917  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:54.388247  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:54.388322  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:54.417539  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:54.417563  346554 cri.go:89] found id: ""
	I1002 07:23:54.417571  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:54.417634  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.421536  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:54.421612  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:54.452318  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:54.452342  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:54.452347  346554 cri.go:89] found id: ""
	I1002 07:23:54.452355  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:54.452410  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.457434  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.460992  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:54.461070  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:54.494010  346554 cri.go:89] found id: ""
	I1002 07:23:54.494031  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.494040  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:54.494045  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:54.494107  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:54.528280  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:54.528300  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:54.528305  346554 cri.go:89] found id: ""
	I1002 07:23:54.528312  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:54.528369  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.532283  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.535876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:54.535946  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:54.564214  346554 cri.go:89] found id: ""
	I1002 07:23:54.564240  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.564250  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:54.564256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:54.564347  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:54.594060  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:54.594084  346554 cri.go:89] found id: ""
	I1002 07:23:54.594093  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:54.594169  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.598344  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:54.598442  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:54.632402  346554 cri.go:89] found id: ""
	I1002 07:23:54.632426  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.632435  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:54.632445  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:54.632500  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:54.729477  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:54.729517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:54.800743  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:54.791704   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.792414   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794124   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794646   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.796482   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:54.791704   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.792414   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794124   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794646   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.796482   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:54.800815  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:54.800846  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:54.861032  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:54.861069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:54.889171  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:54.889244  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:54.925585  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:54.925615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:54.941174  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:54.941202  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:54.969205  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:54.969235  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:55.020047  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:55.020087  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:55.098725  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:55.098805  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:55.132210  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:55.132239  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:57.716428  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:57.730713  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:57.730787  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:57.757853  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:57.757878  346554 cri.go:89] found id: ""
	I1002 07:23:57.757887  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:57.757943  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.761971  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:57.762045  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:57.790866  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:57.790891  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:57.790897  346554 cri.go:89] found id: ""
	I1002 07:23:57.790904  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:57.790962  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.795621  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.799575  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:57.799653  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:57.830281  346554 cri.go:89] found id: ""
	I1002 07:23:57.830307  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.830317  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:57.830323  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:57.830382  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:57.858397  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:57.858420  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:57.858425  346554 cri.go:89] found id: ""
	I1002 07:23:57.858433  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:57.858488  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.862244  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.865851  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:57.865951  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:57.893160  346554 cri.go:89] found id: ""
	I1002 07:23:57.893234  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.893250  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:57.893258  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:57.893318  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:57.920413  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:57.920499  346554 cri.go:89] found id: ""
	I1002 07:23:57.920516  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:57.920585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.924327  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:57.924423  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:57.951174  346554 cri.go:89] found id: ""
	I1002 07:23:57.951197  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.951206  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:57.951216  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:57.951268  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:57.986550  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:57.986632  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:58.017224  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:58.017260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:58.122339  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:58.122377  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:58.138465  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:58.138494  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:58.168292  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:58.168317  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:58.230852  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:58.230890  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:58.328715  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:58.328764  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:58.357761  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:58.357792  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:58.444436  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:58.444482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:58.478280  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:58.478306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:58.560395  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:58.551535   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.552077   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554124   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554594   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.555744   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:58.551535   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.552077   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554124   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554594   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.555744   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:01.061663  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:01.077726  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:01.077804  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:01.106834  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:01.106860  346554 cri.go:89] found id: ""
	I1002 07:24:01.106869  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:01.106940  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.110940  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:01.111014  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:01.139370  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:01.139392  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:01.139397  346554 cri.go:89] found id: ""
	I1002 07:24:01.139404  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:01.139466  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.143857  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.148114  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:01.148207  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:01.178376  346554 cri.go:89] found id: ""
	I1002 07:24:01.178468  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.178493  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:01.178522  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:01.178635  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:01.208075  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:01.208098  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:01.208103  346554 cri.go:89] found id: ""
	I1002 07:24:01.208111  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:01.208178  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.212014  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.216098  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:01.216233  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:01.245384  346554 cri.go:89] found id: ""
	I1002 07:24:01.245424  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.245434  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:01.245440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:01.245503  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:01.282247  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:01.282322  346554 cri.go:89] found id: ""
	I1002 07:24:01.282346  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:01.282443  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.288826  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:01.288905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:01.319901  346554 cri.go:89] found id: ""
	I1002 07:24:01.319926  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.319934  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:01.319943  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:01.319956  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:01.389606  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:01.389692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:01.444021  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:01.444055  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:01.526762  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:01.526804  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:01.559019  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:01.559049  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:01.634782  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:01.634818  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:01.709026  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:01.699679   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.700913   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.701980   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.702845   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.704779   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:01.699679   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.700913   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.701980   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.702845   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.704779   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:01.709100  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:01.709120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:01.738970  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:01.739000  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:01.770329  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:01.770364  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:01.884154  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:01.884232  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:01.902364  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:01.902390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.435943  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:04.447669  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:04.447785  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:04.478942  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.478965  346554 cri.go:89] found id: ""
	I1002 07:24:04.478974  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:04.479030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.483417  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:04.483511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:04.518294  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:04.518320  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:04.518325  346554 cri.go:89] found id: ""
	I1002 07:24:04.518334  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:04.518388  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.522223  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.526427  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:04.526558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:04.558950  346554 cri.go:89] found id: ""
	I1002 07:24:04.558987  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.558996  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:04.559003  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:04.559153  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:04.586620  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:04.586645  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:04.586650  346554 cri.go:89] found id: ""
	I1002 07:24:04.586658  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:04.586737  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.590676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.594540  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:04.594644  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:04.621686  346554 cri.go:89] found id: ""
	I1002 07:24:04.621709  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.621719  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:04.621725  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:04.621781  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:04.649834  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:04.649855  346554 cri.go:89] found id: ""
	I1002 07:24:04.649863  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:04.649944  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.654335  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:04.654436  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:04.687143  346554 cri.go:89] found id: ""
	I1002 07:24:04.687166  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.687175  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:04.687184  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:04.687216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.715298  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:04.715329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:04.758402  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:04.758436  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:04.838751  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:04.838789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:04.870372  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:04.870403  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:04.984168  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:04.984207  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:04.999826  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:04.999858  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:05.088672  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:05.079342   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.080234   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082236   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082893   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.084684   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:05.079342   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.080234   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082236   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082893   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.084684   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:05.088696  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:05.088709  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:05.150024  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:05.150063  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:05.226780  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:05.226819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:05.255567  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:05.255605  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:07.791197  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:07.803594  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:07.803689  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:07.833077  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:07.833103  346554 cri.go:89] found id: ""
	I1002 07:24:07.833113  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:07.833214  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.837537  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:07.837661  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:07.866899  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:07.866926  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:07.866932  346554 cri.go:89] found id: ""
	I1002 07:24:07.866939  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:07.867000  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.870759  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.874593  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:07.874713  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:07.903524  346554 cri.go:89] found id: ""
	I1002 07:24:07.903587  346554 logs.go:282] 0 containers: []
	W1002 07:24:07.903620  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:07.903644  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:07.903738  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:07.934472  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:07.934547  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:07.934567  346554 cri.go:89] found id: ""
	I1002 07:24:07.934593  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:07.934688  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.938660  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.942349  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:07.942453  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:07.969924  346554 cri.go:89] found id: ""
	I1002 07:24:07.969947  346554 logs.go:282] 0 containers: []
	W1002 07:24:07.969956  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:07.969964  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:07.970022  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:07.998801  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:07.998826  346554 cri.go:89] found id: ""
	I1002 07:24:07.998834  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:07.998890  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:08.006051  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:08.006218  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:08.043683  346554 cri.go:89] found id: ""
	I1002 07:24:08.043712  346554 logs.go:282] 0 containers: []
	W1002 07:24:08.043723  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:08.043733  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:08.043746  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:08.094506  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:08.094546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:08.175873  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:08.175912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:08.208161  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:08.208191  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:08.234954  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:08.234983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:08.301287  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:08.301325  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:08.377087  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:08.377123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:08.405378  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:08.405407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:08.431355  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:08.431386  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:08.536433  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:08.536479  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:08.553542  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:08.553575  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:08.621305  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:08.613680   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.614222   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.615692   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.616097   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.617557   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:08.613680   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.614222   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.615692   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.616097   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.617557   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:11.122975  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:11.135150  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:11.135231  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:11.168608  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:11.168633  346554 cri.go:89] found id: ""
	I1002 07:24:11.168642  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:11.168704  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.172810  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:11.172893  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:11.204325  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:11.204401  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:11.204413  346554 cri.go:89] found id: ""
	I1002 07:24:11.204422  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:11.204491  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.208514  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.212208  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:11.212287  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:11.245698  346554 cri.go:89] found id: ""
	I1002 07:24:11.245725  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.245736  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:11.245743  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:11.245805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:11.274196  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:11.274219  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:11.274224  346554 cri.go:89] found id: ""
	I1002 07:24:11.274231  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:11.274292  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.278411  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.282735  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:11.282813  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:11.322108  346554 cri.go:89] found id: ""
	I1002 07:24:11.322129  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.322138  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:11.322144  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:11.322203  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:11.350582  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:11.350647  346554 cri.go:89] found id: ""
	I1002 07:24:11.350659  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:11.350715  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.354559  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:11.354628  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:11.386834  346554 cri.go:89] found id: ""
	I1002 07:24:11.386899  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.386923  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:11.386951  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:11.386981  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:11.465595  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:11.465632  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:11.541894  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:11.541933  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:11.619365  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:11.619408  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:11.647305  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:11.647336  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:11.686923  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:11.686952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:11.792344  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:11.792440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:11.814593  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:11.814623  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:11.895211  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:11.886121   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.886872   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.888767   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.889333   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.890295   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:11.886121   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.886872   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.888767   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.889333   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.890295   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:11.895236  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:11.895250  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:11.921556  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:11.921586  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:11.957833  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:11.957872  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:14.490490  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:14.502377  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:14.502482  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:14.534162  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:14.534185  346554 cri.go:89] found id: ""
	I1002 07:24:14.534205  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:14.534262  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.538631  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:14.538701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:14.568427  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:14.568450  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:14.568456  346554 cri.go:89] found id: ""
	I1002 07:24:14.568463  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:14.568527  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.572917  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.576683  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:14.576760  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:14.604778  346554 cri.go:89] found id: ""
	I1002 07:24:14.604809  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.604819  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:14.604825  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:14.604932  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:14.631788  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:14.631812  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:14.631817  346554 cri.go:89] found id: ""
	I1002 07:24:14.631824  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:14.631887  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.635951  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.639653  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:14.639769  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:14.682797  346554 cri.go:89] found id: ""
	I1002 07:24:14.682823  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.682832  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:14.682839  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:14.682899  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:14.722146  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:14.722175  346554 cri.go:89] found id: ""
	I1002 07:24:14.722183  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:14.722239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.727035  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:14.727164  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:14.759413  346554 cri.go:89] found id: ""
	I1002 07:24:14.759438  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.759447  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:14.759458  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:14.759470  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:14.786929  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:14.787000  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:14.853005  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:14.853042  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:14.899040  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:14.899071  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:15.004708  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:15.004742  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:15.123051  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:15.123106  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:15.154325  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:15.154357  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:15.183161  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:15.183248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:15.265975  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:15.266013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:15.299575  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:15.299607  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:15.315427  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:15.315454  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:15.394115  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:15.385425   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.386315   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388134   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388810   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.390355   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:15.385425   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.386315   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388134   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388810   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.390355   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:17.895569  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:17.909876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:17.909985  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:17.941059  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:17.941083  346554 cri.go:89] found id: ""
	I1002 07:24:17.941092  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:17.941159  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.945318  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:17.945401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:17.973722  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:17.973743  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:17.973747  346554 cri.go:89] found id: ""
	I1002 07:24:17.973755  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:17.973813  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.978340  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.983135  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:17.983214  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:18.024398  346554 cri.go:89] found id: ""
	I1002 07:24:18.024424  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.024433  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:18.024440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:18.024518  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:18.053513  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:18.053535  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:18.053540  346554 cri.go:89] found id: ""
	I1002 07:24:18.053548  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:18.053631  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.057706  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.061744  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:18.061820  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:18.093847  346554 cri.go:89] found id: ""
	I1002 07:24:18.093873  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.093884  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:18.093891  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:18.093956  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:18.123256  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:18.123283  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:18.123289  346554 cri.go:89] found id: ""
	I1002 07:24:18.123296  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:18.123355  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.127263  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.131206  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:18.131284  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:18.157688  346554 cri.go:89] found id: ""
	I1002 07:24:18.157714  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.157724  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:18.157733  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:18.157745  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:18.203920  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:18.203946  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:18.220036  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:18.220064  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:18.288859  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:18.281281   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.282404   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283332   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283985   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.285062   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:18.281281   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.282404   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283332   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283985   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.285062   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:18.288885  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:18.288898  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:18.326029  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:18.326064  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:18.410880  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:18.410919  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:18.516955  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:18.516994  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:18.548753  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:18.548786  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:18.613812  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:18.613849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:18.643416  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:18.643444  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:18.670170  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:18.670199  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:18.699194  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:18.699231  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:21.274356  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:21.285713  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:21.285785  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:21.312389  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:21.312413  346554 cri.go:89] found id: ""
	I1002 07:24:21.312427  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:21.312492  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.316212  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:21.316290  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:21.341368  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:21.341390  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:21.341396  346554 cri.go:89] found id: ""
	I1002 07:24:21.341403  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:21.341458  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.345157  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.348764  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:21.348841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:21.381263  346554 cri.go:89] found id: ""
	I1002 07:24:21.381292  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.381302  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:21.381308  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:21.381366  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:21.412001  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:21.412022  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:21.412027  346554 cri.go:89] found id: ""
	I1002 07:24:21.412035  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:21.412092  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.415991  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.419745  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:21.419818  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:21.448790  346554 cri.go:89] found id: ""
	I1002 07:24:21.448817  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.448826  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:21.448832  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:21.448894  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:21.476863  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:21.476885  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:21.476890  346554 cri.go:89] found id: ""
	I1002 07:24:21.476897  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:21.476995  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.481180  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.484939  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:21.485015  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:21.518979  346554 cri.go:89] found id: ""
	I1002 07:24:21.519005  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.519014  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:21.519023  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:21.519035  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:21.548837  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:21.548868  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:21.577649  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:21.577678  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:21.614505  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:21.614538  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:21.648602  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:21.648630  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:21.730478  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:21.730515  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:21.770385  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:21.770420  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:21.869953  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:21.869990  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:21.890825  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:21.890864  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:21.963492  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:21.954886   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.955596   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957198   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957744   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.959330   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:21.954886   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.955596   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957198   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957744   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.959330   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:21.963514  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:21.963531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:21.990531  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:21.990559  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:22.069923  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:22.070005  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:24.652448  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:24.663850  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:24.663928  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:24.691270  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:24.691349  346554 cri.go:89] found id: ""
	I1002 07:24:24.691385  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:24.691483  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.695776  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:24.695846  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:24.722540  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:24.722563  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:24.722568  346554 cri.go:89] found id: ""
	I1002 07:24:24.722575  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:24.722641  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.726529  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.730111  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:24.730184  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:24.760973  346554 cri.go:89] found id: ""
	I1002 07:24:24.760999  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.761009  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:24.761015  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:24.761096  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:24.788682  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:24.788702  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:24.788707  346554 cri.go:89] found id: ""
	I1002 07:24:24.788714  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:24.788771  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.795284  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.800831  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:24.800927  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:24.826399  346554 cri.go:89] found id: ""
	I1002 07:24:24.826434  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.826443  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:24.826464  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:24.826550  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:24.854301  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:24.854328  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:24.854334  346554 cri.go:89] found id: ""
	I1002 07:24:24.854341  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:24.854423  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.858547  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.862285  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:24.862407  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:24.892024  346554 cri.go:89] found id: ""
	I1002 07:24:24.892048  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.892057  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:24.892067  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:24.892079  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:24.993633  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:24.993672  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:25.023967  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:25.023999  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:25.088069  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:25.088104  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:25.171716  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:25.171754  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:25.211296  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:25.211330  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:25.277865  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:25.269711   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.270447   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272032   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272563   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.274098   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:25.269711   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.270447   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272032   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272563   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.274098   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:25.277888  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:25.277901  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:25.305336  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:25.305363  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:25.339149  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:25.339311  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:25.419370  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:25.419407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:25.452415  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:25.452447  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:25.482792  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:25.482824  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:28.019833  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:28.031976  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:28.032047  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:28.061518  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:28.061538  346554 cri.go:89] found id: ""
	I1002 07:24:28.061547  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:28.061610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.065737  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:28.065812  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:28.100250  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:28.100274  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:28.100280  346554 cri.go:89] found id: ""
	I1002 07:24:28.100287  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:28.100347  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.104729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.109130  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:28.109242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:28.136194  346554 cri.go:89] found id: ""
	I1002 07:24:28.136220  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.136229  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:28.136235  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:28.136294  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:28.177728  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:28.177751  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:28.177756  346554 cri.go:89] found id: ""
	I1002 07:24:28.177764  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:28.177822  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.182057  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.185909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:28.185984  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:28.213081  346554 cri.go:89] found id: ""
	I1002 07:24:28.213104  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.213114  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:28.213120  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:28.213180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:28.242037  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:28.242061  346554 cri.go:89] found id: ""
	I1002 07:24:28.242070  346554 logs.go:282] 1 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd]
	I1002 07:24:28.242125  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.245909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:28.245982  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:28.272643  346554 cri.go:89] found id: ""
	I1002 07:24:28.272688  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.272698  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:28.272708  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:28.272741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:28.368590  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:28.368674  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:28.441922  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:28.433374   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.434538   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.435818   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.436626   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.438305   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:28.433374   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.434538   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.435818   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.436626   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.438305   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:28.441993  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:28.442025  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:28.485137  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:28.485174  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:28.519916  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:28.519949  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:28.547334  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:28.547364  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:28.578668  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:28.578698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:28.597024  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:28.597053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:28.625533  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:28.625562  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:28.703945  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:28.703983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:28.782221  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:28.782256  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:31.363217  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:31.375576  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:31.375651  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:31.412392  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:31.412416  346554 cri.go:89] found id: ""
	I1002 07:24:31.412425  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:31.412489  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.416397  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:31.416497  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:31.447142  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:31.447172  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:31.447178  346554 cri.go:89] found id: ""
	I1002 07:24:31.447186  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:31.447245  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.451130  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.454872  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:31.454972  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:31.491372  346554 cri.go:89] found id: ""
	I1002 07:24:31.491393  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.491401  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:31.491407  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:31.491464  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:31.523581  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:31.523606  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:31.523611  346554 cri.go:89] found id: ""
	I1002 07:24:31.523618  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:31.523696  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.527714  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.531521  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:31.531638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:31.557016  346554 cri.go:89] found id: ""
	I1002 07:24:31.557090  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.557110  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:31.557117  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:31.557180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:31.587792  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:31.587815  346554 cri.go:89] found id: ""
	I1002 07:24:31.587824  346554 logs.go:282] 1 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd]
	I1002 07:24:31.587900  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.591474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:31.591544  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:31.621938  346554 cri.go:89] found id: ""
	I1002 07:24:31.622002  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.622025  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:31.622057  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:31.622087  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:31.699830  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:31.699940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:31.731270  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:31.731297  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:31.830036  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:31.830073  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:31.849448  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:31.849489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:31.887973  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:31.888002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:31.925845  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:31.925879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:31.955314  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:31.955344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:32.027448  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:32.017106   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.018245   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.019008   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.021153   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.022262   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:32.017106   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.018245   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.019008   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.021153   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.022262   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:32.027527  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:32.027556  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:32.097086  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:32.097123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:32.181841  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:32.181877  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:34.710633  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:34.725897  346554 out.go:203] 
	W1002 07:24:34.728826  346554 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1002 07:24:34.728867  346554 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1002 07:24:34.728877  346554 out.go:285] * Related issues:
	W1002 07:24:34.728892  346554 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1002 07:24:34.728908  346554 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1002 07:24:34.732168  346554 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:19:49 ha-550225 crio[619]: time="2025-10-02T07:19:49.845674437Z" level=info msg="Started container" PID=1394 containerID=3269c04f5498e2befbc42b6cf2cdbe83a291623d3fde767dc07389c7422afd48 description=kube-system/coredns-66bc5c9577-s6dq8/coredns id=566bb378-7524-4452-b1e6-a25280ba5d7d name=/runtime.v1.RuntimeService/StartContainer sandboxID=e055873f04c2899609f0c3b597c607526b01fd136aa0e5f79f2676a446255f13
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.208804519Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.215218136Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.215264529Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.215287667Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.22352303Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.223562538Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.223586029Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.23080621Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.230844857Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.230864434Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.236373132Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.236409153Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:20:15 ha-550225 conmon[1183]: conmon 48fccb25ba33b3850afc <ninfo>: container 1186 exited with status 1
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.461105809Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5008df2b-58c5-42b1-a1f6-e14a10f1abbb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.46213329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b8ddfc43-aba7-4f99-b91d-97240f3eaf35 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.46331964Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=55bd6811-47fe-4715-9579-6244ca41dc93 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.463596057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.472956017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.47327584Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6958a022ca5d2e537c24f18da644191de8f0c379072dbf05004476abea1680e8/merged/etc/passwd: no such file or directory"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.473326269Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6958a022ca5d2e537c24f18da644191de8f0c379072dbf05004476abea1680e8/merged/etc/group: no such file or directory"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.473692689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.493904849Z" level=info msg="Created container 5b2624a029b4c010b76ac52edd332193351ee65c37100ef8fbe63d85d02c3e71: kube-system/storage-provisioner/storage-provisioner" id=55bd6811-47fe-4715-9579-6244ca41dc93 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.495150407Z" level=info msg="Starting container: 5b2624a029b4c010b76ac52edd332193351ee65c37100ef8fbe63d85d02c3e71" id=b45832b0-a0c9-4ad1-8a10-5fba7e2ccb21 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.499183546Z" level=info msg="Started container" PID=1457 containerID=5b2624a029b4c010b76ac52edd332193351ee65c37100ef8fbe63d85d02c3e71 description=kube-system/storage-provisioner/storage-provisioner id=b45832b0-a0c9-4ad1-8a10-5fba7e2ccb21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc2b31ede15861c2d07fce3991053334dcdd31f17b14021784ac1be8ed7e0b31
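	The conmon line above records the first storage-provisioner container (48fccb25ba33b...) exiting with status 1 before CRI-O creates and starts the replacement 5b2624a029b4c. If more context around those events is needed, the daemon journal and the exited container's own log can be pulled from the node; illustrative commands, assuming the systemd unit is named crio as on the standard minikube image:
	
	  minikube ssh -p ha-550225 "sudo journalctl -u crio --no-pager | tail -n 200"
	  minikube ssh -p ha-550225 "sudo crictl logs --tail 50 48fccb25ba33b"   # why the first attempt exited with status 1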
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	5b2624a029b4c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Running             storage-provisioner       2                   bc2b31ede1586       storage-provisioner                 kube-system
	3269c04f5498e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   1                   e055873f04c28       coredns-66bc5c9577-s6dq8            kube-system
	448d4967d9024       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   4 minutes ago       Running             busybox                   1                   e934129b46d08       busybox-7b57f96db7-gph4b            default
	8a9ee715e4343       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 minutes ago       Running             kindnet-cni               1                   edd2550dab874       kindnet-v7wnc                       kube-system
	5051222f30f0a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 minutes ago       Running             kube-proxy                1                   3e269f3dd585c       kube-proxy-skqs2                    kube-system
	48fccb25ba33b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Exited              storage-provisioner       1                   bc2b31ede1586       storage-provisioner                 kube-system
	97a0ea46cf7f7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   1                   70fe4e27581bb       coredns-66bc5c9577-7gnh8            kube-system
	0dcd791f01f43       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   5 minutes ago       Running             kube-controller-manager   11                  19a2185d4a1eb       kube-controller-manager-ha-550225   kube-system
	8290015e8c15e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   5 minutes ago       Running             kube-apiserver            10                  b2181fe55e225       kube-apiserver-ha-550225            kube-system
	29394f92b6a36       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   10                  19a2185d4a1eb       kube-controller-manager-ha-550225   kube-system
	5b0c0535da780       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Exited              kube-apiserver            9                   b2181fe55e225       kube-apiserver-ha-550225            kube-system
	5f7223d3b4009       27aa99ef07bb63db109cae7189f6029203a1ba86e8d201ca72eb836e3cdd0b43   7 minutes ago       Running             kube-vip                  1                   c455a5f1f2468       kube-vip-ha-550225                  kube-system
	43f493b22d959       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Running             etcd                      3                   8c156781bf4ef       etcd-ha-550225                      kube-system
	2b4cd729501f6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            2                   b0329f645e59c       kube-scheduler-ha-550225            kube-system
	
	
	==> coredns [3269c04f5498e2befbc42b6cf2cdbe83a291623d3fde767dc07389c7422afd48] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50597 - 50866 "HINFO IN 2471821353559588233.5453610813505731232. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027203243s
	
	
	==> coredns [97a0ea46cf7f751b62a77918089760dd2e292198c9c2fc951fc282e4636ba492] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56369 - 30635 "HINFO IN 7137530019898463004.8479900960678889237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 2.018878387s
	[INFO] 127.0.0.1:38056 - 50955 "HINFO IN 7137530019898463004.8479900960678889237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041678969s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
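	10.96.0.1:443 is the ClusterIP of the default kubernetes Service, so these timeouts mean in-cluster clients (here coredns) could not reach the apiserver while it was restarting. Once the apiserver answers again, the Service and the endpoints backing it can be confirmed with, for example:
	
	  kubectl get svc kubernetes -o wide
	  kubectl get endpointslices -l kubernetes.io/service-name=kubernetes   # which apiserver addresses back the Service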
	
	
	==> describe nodes <==
	Name:               ha-550225
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_03_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:02:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:24:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:02:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:02:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:02:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:03:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-550225
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 804fc56d691a47babcd58cd3553282d3
	  System UUID:                96b9796d-f076-4bf0-ac0e-2eccc9d5873e
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-gph4b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-66bc5c9577-7gnh8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     21m
	  kube-system                 coredns-66bc5c9577-s6dq8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     21m
	  kube-system                 etcd-ha-550225                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         21m
	  kube-system                 kindnet-v7wnc                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-apiserver-ha-550225             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-550225    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-skqs2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-550225             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-550225                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 21m                    kube-proxy       
	  Normal   Starting                 4m52s                  kube-proxy       
	  Normal   Starting                 21m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 21m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  21m (x8 over 21m)      kubelet          Node ha-550225 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     21m (x8 over 21m)      kubelet          Node ha-550225 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)      kubelet          Node ha-550225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasNoDiskPressure    21m                    kubelet          Node ha-550225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m                    kubelet          Node ha-550225 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  21m                    kubelet          Node ha-550225 status is now: NodeHasSufficientMemory
	  Normal   Starting                 21m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 21m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           21m                    node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   RegisteredNode           21m                    node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   NodeReady                20m                    kubelet          Node ha-550225 status is now: NodeReady
	  Normal   RegisteredNode           19m                    node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   RegisteredNode           16m                    node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   Starting                 7m47s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m47s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m47s (x8 over 7m47s)  kubelet          Node ha-550225 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m47s (x8 over 7m47s)  kubelet          Node ha-550225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m47s (x8 over 7m47s)  kubelet          Node ha-550225 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m38s                  node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	
	
	Name:               ha-550225-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_02T07_03_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:03:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:08:21 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-550225-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 08dcc5805aac4edbab34bc4710db5eef
	  System UUID:                c6a05e31-956b-4e2f-af6e-62090982b7b4
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wbl7l                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-550225-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         20m
	  kube-system                 kindnet-n6kwf                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-apiserver-ha-550225-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-550225-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-jkkmq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-550225-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-550225-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 20m                kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   RegisteredNode           20m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   RegisteredNode           20m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   RegisteredNode           19m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-550225-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-550225-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x8 over 17m)  kubelet          Node ha-550225-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   RegisteredNode           5m38s              node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   NodeNotReady             4m48s              node-controller  Node ha-550225-m02 status is now: NodeNotReady
	
	
	Name:               ha-550225-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_02T07_04_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:04:57 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:08:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-550225-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 315218fdc78646b99ded6becf46edf67
	  System UUID:                4ea95856-3488-4a4f-b299-e71342dd8d89
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-q95k5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-550225-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-2w4k5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-ha-550225-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-550225-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-2k945                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-550225-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-550225-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        19m    kube-proxy       
	  Normal  RegisteredNode  19m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  19m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  19m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  16m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  5m38s  node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  NodeNotReady    4m48s  node-controller  Node ha-550225-m03 status is now: NodeNotReady
	
	
	Name:               ha-550225-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_02T07_06_15_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:06:14 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:08:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-550225-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 4bfee30c7b434881a054adc06b7ffd73
	  System UUID:                9c87cedb-25ad-496a-a907-0c95201b1fe7
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2h5qc       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-proxy-gf52r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  RegisteredNode           18m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  NodeHasSufficientMemory  18m (x4 over 18m)  kubelet          Node ha-550225-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x4 over 18m)  kubelet          Node ha-550225-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x4 over 18m)  kubelet          Node ha-550225-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  NodeReady                17m                kubelet          Node ha-550225-m04 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  RegisteredNode           5m38s              node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  NodeNotReady             4m48s              node-controller  Node ha-550225-m04 status is now: NodeNotReady
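	All three secondary nodes (m02, m03, m04) stopped posting status at 07:19:50 and carry unreachable taints, which is the state the RestartClusterKeepsNodes failure above reflects. A compact view of the same information, instead of the full describe output, would be something like:
	
	  kubectl get nodes -o wide
	  kubectl describe node ha-550225-m02 | grep -A 6 "Conditions:"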
	
	
	==> dmesg <==
	[Oct 2 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014797] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531434] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.039899] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.787301] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.571073] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 2 05:52] hrtimer: interrupt took 24222969 ns
	[Oct 2 06:40] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:42] overlayfs: idmapped layers are currently not supported
	[  +0.072713] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 06:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 06:49] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:03] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:06] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:07] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:08] overlayfs: idmapped layers are currently not supported
	[  +3.056037] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:16] overlayfs: idmapped layers are currently not supported
	[  +2.690454] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [43f493b22d959eb4018498d0af4c8a03328857db3567f13cb0ffaee9ec06c00b] <==
	{"level":"warn","ts":"2025-10-02T07:24:38.191700Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.198939Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.211514Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.220247Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.230629Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.237654Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.240632Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.259414Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.279503Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.279863Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.310176Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.315491Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.319265Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.322931Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.332407Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.340954Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.345428Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.349409Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.353074Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.362071Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.371633Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.372810Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.373668Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.379579Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:38.435526Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 07:24:38 up  2:07,  0 user,  load average: 1.36, 0.99, 1.14
	Linux ha-550225 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8a9ee715e43431e349cf8c9be623f1a296d01184f3204e6a4a0f8394fc70358e] <==
	I1002 07:24:08.213350       1 main.go:324] Node ha-550225-m02 has CIDR [10.244.1.0/24] 
	I1002 07:24:18.212188       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1002 07:24:18.212287       1 main.go:324] Node ha-550225-m04 has CIDR [10.244.3.0/24] 
	I1002 07:24:18.212500       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:18.212609       1 main.go:301] handling current node
	I1002 07:24:18.212650       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1002 07:24:18.212683       1 main.go:324] Node ha-550225-m02 has CIDR [10.244.1.0/24] 
	I1002 07:24:18.213031       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1002 07:24:18.215444       1 main.go:324] Node ha-550225-m03 has CIDR [10.244.2.0/24] 
	I1002 07:24:28.207379       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1002 07:24:28.207511       1 main.go:324] Node ha-550225-m02 has CIDR [10.244.1.0/24] 
	I1002 07:24:28.207747       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1002 07:24:28.207827       1 main.go:324] Node ha-550225-m03 has CIDR [10.244.2.0/24] 
	I1002 07:24:28.207968       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1002 07:24:28.208017       1 main.go:324] Node ha-550225-m04 has CIDR [10.244.3.0/24] 
	I1002 07:24:28.208188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:28.208240       1 main.go:301] handling current node
	I1002 07:24:38.211259       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:38.211291       1 main.go:301] handling current node
	I1002 07:24:38.211307       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1002 07:24:38.211313       1 main.go:324] Node ha-550225-m02 has CIDR [10.244.1.0/24] 
	I1002 07:24:38.211454       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1002 07:24:38.211461       1 main.go:324] Node ha-550225-m03 has CIDR [10.244.2.0/24] 
	I1002 07:24:38.211513       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1002 07:24:38.211519       1 main.go:324] Node ha-550225-m04 has CIDR [10.244.3.0/24] 
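	kindnet is still programming pod-CIDR routes for all four nodes even though three of them are unreachable; the routes it installed on ha-550225 can be inspected with, for example:
	
	  minikube ssh -p ha-550225 "ip route | grep 10.244"   # one route per remote node's pod CIDR, via that node's IP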
	
	
	==> kube-apiserver [5b0c0535da7807f278c4629073d71180fc43a369ddae7136c7ffd515a7e95c6b] <==
	I1002 07:18:00.892979       1 server.go:150] Version: v1.34.1
	I1002 07:18:00.893076       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1002 07:18:02.015138       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1002 07:18:02.015252       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1002 07:18:02.015284       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1002 07:18:02.015315       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1002 07:18:02.015348       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1002 07:18:02.015382       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1002 07:18:02.015415       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1002 07:18:02.015448       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1002 07:18:02.015481       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1002 07:18:02.015512       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1002 07:18:02.015544       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1002 07:18:02.015575       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1002 07:18:02.033014       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1002 07:18:02.034577       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1002 07:18:02.035335       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1002 07:18:02.045748       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 07:18:02.056978       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1002 07:18:02.057010       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1002 07:18:02.057337       1 instance.go:239] Using reconciler: lease
	W1002 07:18:02.058416       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1002 07:18:22.032470       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1002 07:18:22.034569       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1002 07:18:22.058050       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
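	This earlier apiserver attempt (container 5b0c0535da780) failed because every dial to etcd on 127.0.0.1:2379 was cancelled, and after roughly 20 seconds it hit the fatal "Error creating leases ... context deadline exceeded" above. Whether etcd was even listening on the client port at that point can be checked on the node, illustratively:
	
	  minikube ssh -p ha-550225 "sudo ss -ltnp | grep 2379"      # is anything bound on the etcd client port?
	  minikube ssh -p ha-550225 "sudo crictl ps -a --name etcd"  # etcd container state and restart history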
	
	
	==> kube-apiserver [8290015e8c15e01397448ee79ef46f66d0ddd62579c46b3fd334baf073a9d6bc] <==
	I1002 07:18:54.901508       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 07:18:54.914584       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 07:18:54.914862       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 07:18:54.917776       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:18:54.920456       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 07:18:54.921448       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 07:18:54.921690       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 07:18:54.935006       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 07:18:54.935120       1 policy_source.go:240] refreshing policies
	I1002 07:18:54.936177       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:18:54.995047       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 07:18:54.995073       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 07:18:55.006144       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1002 07:18:55.006401       1 aggregator.go:171] initial CRD sync complete...
	I1002 07:18:55.006443       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 07:18:55.006472       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 07:18:55.006502       1 cache.go:39] Caches are synced for autoregister controller
	I1002 07:18:55.693729       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:18:55.915859       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1002 07:18:56.852268       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 07:18:56.854341       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:18:56.866097       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:19:00.445840       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 07:19:00.449414       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 07:19:00.588914       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [0dcd791f01f43325da7d666b2308b7e9e8afd6c81f0dce7b635d6b6e5e8a9df1] <==
	I1002 07:19:00.416685       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 07:19:00.422763       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:19:00.422858       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 07:19:00.422891       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 07:19:00.429174       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 07:19:00.430239       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 07:19:00.434548       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 07:19:00.434793       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 07:19:00.434939       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:19:00.434988       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 07:19:00.435000       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 07:19:00.435011       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 07:19:00.435027       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 07:19:00.436974       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 07:19:00.437153       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 07:19:00.437213       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 07:19:00.437246       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 07:19:00.437276       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 07:19:00.440308       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:19:00.441271       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 07:19:00.447203       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 07:19:00.447327       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 07:19:00.447774       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-550225-m04"
	I1002 07:19:50.432665       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-550225-m04"
	I1002 07:19:50.870389       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	
	
	==> kube-controller-manager [29394f92b6a368bb1845ecb24b6cebce9a3e6e6816e60bf240997292037f264a] <==
	I1002 07:18:16.059120       1 serving.go:386] Generated self-signed cert in-memory
	I1002 07:18:17.185952       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1002 07:18:17.185981       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:18:17.187402       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 07:18:17.187586       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 07:18:17.187839       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1002 07:18:17.187927       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 07:18:33.066017       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-proxy [5051222f30f0ae589e47ad3f24adc858d48fe99da320fc5495aa8189ecc36596] <==
	I1002 07:19:45.951789       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:19:46.028809       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:19:46.129896       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:19:46.129933       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 07:19:46.130000       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:19:46.150308       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:19:46.150378       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:19:46.154018       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:19:46.154343       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:19:46.154416       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:19:46.157478       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:19:46.157553       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:19:46.157874       1 config.go:200] "Starting service config controller"
	I1002 07:19:46.157918       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:19:46.158250       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:19:46.158295       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:19:46.158742       1 config.go:309] "Starting node config controller"
	I1002 07:19:46.158794       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:19:46.158824       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:19:46.258046       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:19:46.258051       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 07:19:46.258406       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2b4cd729501f68e709fb29b74cdf4d89db019e465f669755a276bbd13dfa365d] <==
	E1002 07:17:57.915557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:17:59.343245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:18:17.475604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:18:19.476430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 07:18:20.523426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:18:20.961075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:18:21.209835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:18:22.175039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:18:23.065717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33332->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 07:18:23.065828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33338->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:18:23.065904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33346->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:18:23.066085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33356->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:18:23.066195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48896->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:18:23.066285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33302->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:18:23.066377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33316->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:18:23.066451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33400->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:18:23.067303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33366->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:18:23.067355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48888->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:18:23.067419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48872->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:18:23.067516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48892->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 07:18:23.067591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33382->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:18:50.334725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:18:54.767637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:18:54.767804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1002 07:18:55.890008       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:19:21 ha-550225 kubelet[753]: E1002 07:19:21.811346     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(f74a25ae-35bd-44b0-84a9-50a5df5dec1d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:21 ha-550225 kubelet[753]: E1002 07:19:21.811400     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="f74a25ae-35bd-44b0-84a9-50a5df5dec1d"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.810797     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-gph4b_default(193a390b-ce6f-4e39-afcc-7ee671deb0a1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.810843     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-gph4b" podUID="193a390b-ce6f-4e39-afcc-7ee671deb0a1"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.811359     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-s6dq8_kube-system(7626557b-e8fe-419b-b447-994cfa9b0f07): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.811895     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-s6dq8" podUID="7626557b-e8fe-419b-b447-994cfa9b0f07"
	Oct 02 07:19:23 ha-550225 kubelet[753]: E1002 07:19:23.811789     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-v7wnc_kube-system(b011ceef-f3c8-4142-8385-b09113581770): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:23 ha-550225 kubelet[753]: E1002 07:19:23.811826     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-v7wnc" podUID="b011ceef-f3c8-4142-8385-b09113581770"
	Oct 02 07:19:24 ha-550225 kubelet[753]: E1002 07:19:24.810191     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-7gnh8_kube-system(55461d93-6678-4e2e-8b48-7d26628c1cf9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:24 ha-550225 kubelet[753]: E1002 07:19:24.810240     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-7gnh8" podUID="55461d93-6678-4e2e-8b48-7d26628c1cf9"
	Oct 02 07:19:31 ha-550225 kubelet[753]: E1002 07:19:31.812684     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-skqs2_kube-system(d5f2a06e-009a-4c94-aee4-c6d515d1a38b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:31 ha-550225 kubelet[753]: E1002 07:19:31.812750     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-skqs2" podUID="d5f2a06e-009a-4c94-aee4-c6d515d1a38b"
	Oct 02 07:19:32 ha-550225 kubelet[753]: E1002 07:19:32.810908     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(f74a25ae-35bd-44b0-84a9-50a5df5dec1d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:32 ha-550225 kubelet[753]: E1002 07:19:32.811030     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="f74a25ae-35bd-44b0-84a9-50a5df5dec1d"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812380     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-s6dq8_kube-system(7626557b-e8fe-419b-b447-994cfa9b0f07): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812427     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-s6dq8" podUID="7626557b-e8fe-419b-b447-994cfa9b0f07"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812402     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-gph4b_default(193a390b-ce6f-4e39-afcc-7ee671deb0a1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812917     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-v7wnc_kube-system(b011ceef-f3c8-4142-8385-b09113581770): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.814141     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-v7wnc" podUID="b011ceef-f3c8-4142-8385-b09113581770"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.814168     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-gph4b" podUID="193a390b-ce6f-4e39-afcc-7ee671deb0a1"
	Oct 02 07:19:51 ha-550225 kubelet[753]: E1002 07:19:51.724599     753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d\": container with ID starting with 15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d not found: ID does not exist" containerID="15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d"
	Oct 02 07:19:51 ha-550225 kubelet[753]: I1002 07:19:51.724702     753 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d" err="rpc error: code = NotFound desc = could not find container \"15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d\": container with ID starting with 15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d not found: ID does not exist"
	Oct 02 07:19:51 ha-550225 kubelet[753]: E1002 07:19:51.725359     753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04\": container with ID starting with c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04 not found: ID does not exist" containerID="c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04"
	Oct 02 07:19:51 ha-550225 kubelet[753]: I1002 07:19:51.725398     753 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04" err="rpc error: code = NotFound desc = could not find container \"c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04\": container with ID starting with c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04 not found: ID does not exist"
	Oct 02 07:20:16 ha-550225 kubelet[753]: I1002 07:20:16.460466     753 scope.go:117] "RemoveContainer" containerID="48fccb25ba33b3850afc1ffdf5ca13f71673b1d992497dbcadf93bdbc8bdee4c"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-550225 -n ha-550225
helpers_test.go:269: (dbg) Run:  kubectl --context ha-550225 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (477.29s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (5.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-550225" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-550225\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-550225\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-550225\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvid
ia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizat
ions\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-550225
helpers_test.go:243: (dbg) docker inspect ha-550225:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	        "Created": "2025-10-02T07:02:30.539981852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 346684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:16:43.830280649Z",
	            "FinishedAt": "2025-10-02T07:16:42.559270036Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hosts",
	        "LogPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c-json.log",
	        "Name": "/ha-550225",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-550225:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-550225",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	                "LowerDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-550225",
	                "Source": "/var/lib/docker/volumes/ha-550225/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-550225",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-550225",
	                "name.minikube.sigs.k8s.io": "ha-550225",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afa0a4e6ee5917c0a800a9abfad94a173555b01d2438c9506474ee7c27ad6564",
	            "SandboxKey": "/var/run/docker/netns/afa0a4e6ee59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-550225": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:f4:60:b8:9c:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "87a294cab4b5d50d5f227902c62678f378fbede9275f1d54f0b3de7a1f36e1a0",
	                    "EndpointID": "e0227cbf31cf607a461ab665f3bdb5d5d554f27df511a468e38aecbd366c38c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-550225",
	                        "1c1f8ec53310"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-550225 -n ha-550225
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-550225 logs -n 25: (2.247542s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-550225 cp ha-550225-m03:/home/docker/cp-test.txt ha-550225-m04:/home/docker/cp-test_ha-550225-m03_ha-550225-m04.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test_ha-550225-m03_ha-550225-m04.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp testdata/cp-test.txt ha-550225-m04:/home/docker/cp-test.txt                                                             │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216719830/001/cp-test_ha-550225-m04.txt │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225:/home/docker/cp-test_ha-550225-m04_ha-550225.txt                       │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225.txt                                                 │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m02:/home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m02 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m03:/home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ node    │ ha-550225 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ node    │ ha-550225 node start m02 --alsologtostderr -v 5                                                                                      │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:08 UTC │
	│ node    │ ha-550225 node list --alsologtostderr -v 5                                                                                           │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │                     │
	│ stop    │ ha-550225 stop --alsologtostderr -v 5                                                                                                │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │ 02 Oct 25 07:08 UTC │
	│ start   │ ha-550225 start --wait true --alsologtostderr -v 5                                                                                   │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │                     │
	│ node    │ ha-550225 node list --alsologtostderr -v 5                                                                                           │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	│ node    │ ha-550225 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	│ stop    │ ha-550225 stop --alsologtostderr -v 5                                                                                                │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │ 02 Oct 25 07:16 UTC │
	│ start   │ ha-550225 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:16:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:16:43.556654  346554 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:16:43.556900  346554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:43.556935  346554 out.go:374] Setting ErrFile to fd 2...
	I1002 07:16:43.556957  346554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:43.557253  346554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:16:43.557663  346554 out.go:368] Setting JSON to false
	I1002 07:16:43.558546  346554 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7155,"bootTime":1759382249,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:16:43.558645  346554 start.go:140] virtualization:  
	I1002 07:16:43.562097  346554 out.go:179] * [ha-550225] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:16:43.565995  346554 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:16:43.566065  346554 notify.go:220] Checking for updates...
	I1002 07:16:43.572511  346554 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:16:43.575317  346554 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:43.578176  346554 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:16:43.580964  346554 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:16:43.583787  346554 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:16:43.587186  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:43.587749  346554 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:16:43.619258  346554 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:16:43.619425  346554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:16:43.676323  346554 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:16:43.665454213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:16:43.676450  346554 docker.go:318] overlay module found
	I1002 07:16:43.679463  346554 out.go:179] * Using the docker driver based on existing profile
	I1002 07:16:43.682328  346554 start.go:304] selected driver: docker
	I1002 07:16:43.682357  346554 start.go:924] validating driver "docker" against &{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:16:43.682550  346554 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:16:43.682661  346554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:16:43.739766  346554 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:16:43.730208669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:16:43.740206  346554 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:16:43.740241  346554 cni.go:84] Creating CNI manager for ""
	I1002 07:16:43.740306  346554 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:16:43.740357  346554 start.go:348] cluster config:
	{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:16:43.743601  346554 out.go:179] * Starting "ha-550225" primary control-plane node in "ha-550225" cluster
	I1002 07:16:43.746399  346554 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:16:43.749341  346554 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:16:43.752288  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:43.752352  346554 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:16:43.752374  346554 cache.go:58] Caching tarball of preloaded images
	I1002 07:16:43.752377  346554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:16:43.752484  346554 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:16:43.752495  346554 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:16:43.752642  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:43.772750  346554 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:16:43.772775  346554 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:16:43.772803  346554 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:16:43.772827  346554 start.go:360] acquireMachinesLock for ha-550225: {Name:mkc1f009b4f35f6b87d580d72d0a621c44a033f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:16:43.772899  346554 start.go:364] duration metric: took 46.236µs to acquireMachinesLock for "ha-550225"
	I1002 07:16:43.772922  346554 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:16:43.772934  346554 fix.go:54] fixHost starting: 
	I1002 07:16:43.773187  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:16:43.794446  346554 fix.go:112] recreateIfNeeded on ha-550225: state=Stopped err=<nil>
	W1002 07:16:43.794478  346554 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:16:43.797824  346554 out.go:252] * Restarting existing docker container for "ha-550225" ...
	I1002 07:16:43.797912  346554 cli_runner.go:164] Run: docker start ha-550225
	I1002 07:16:44.052064  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:16:44.071577  346554 kic.go:430] container "ha-550225" state is running.
	I1002 07:16:44.071977  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:44.097000  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:44.097247  346554 machine.go:93] provisionDockerMachine start ...
	I1002 07:16:44.097316  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:44.119603  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:44.120087  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:44.120103  346554 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:16:44.120661  346554 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57572->127.0.0.1:33188: read: connection reset by peer
	I1002 07:16:47.250760  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:16:47.250786  346554 ubuntu.go:182] provisioning hostname "ha-550225"
	I1002 07:16:47.250888  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:47.268212  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:47.268525  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:47.268543  346554 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225 && echo "ha-550225" | sudo tee /etc/hostname
	I1002 07:16:47.408749  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:16:47.408837  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:47.428229  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:47.428559  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:47.428582  346554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:16:47.563394  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
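
	The provisioning commands above set the container hostname to "ha-550225" and keep a matching 127.0.1.1 entry in /etc/hosts. A minimal sketch, assuming shell access to the node, of how to confirm the result:

	    # Sketch: verify hostname provisioning (run inside the ha-550225 container)
	    hostname                        # expected: ha-550225
	    grep -n 'ha-550225' /etc/hosts  # expected: a "127.0.1.1 ha-550225" line
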
	I1002 07:16:47.563422  346554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:16:47.563445  346554 ubuntu.go:190] setting up certificates
	I1002 07:16:47.563480  346554 provision.go:84] configureAuth start
	I1002 07:16:47.563555  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:47.583742  346554 provision.go:143] copyHostCerts
	I1002 07:16:47.583804  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:47.583843  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:16:47.583865  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:47.583942  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:16:47.584044  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:47.584067  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:16:47.584076  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:47.584105  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:16:47.584165  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:47.584188  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:16:47.584197  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:47.584232  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:16:47.584294  346554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225 san=[127.0.0.1 192.168.49.2 ha-550225 localhost minikube]
	I1002 07:16:49.085710  346554 provision.go:177] copyRemoteCerts
	I1002 07:16:49.085804  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:16:49.085919  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.102600  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.203033  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:16:49.203111  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:16:49.220709  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:16:49.220773  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 07:16:49.238283  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:16:49.238380  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:16:49.255763  346554 provision.go:87] duration metric: took 1.692265184s to configureAuth
	I1002 07:16:49.255832  346554 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:16:49.256105  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:49.256221  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.273296  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:49.273613  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:49.273636  346554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:16:49.545258  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:16:49.545281  346554 machine.go:96] duration metric: took 5.448016594s to provisionDockerMachine
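
	The CRIO_MINIKUBE_OPTIONS step above writes /etc/sysconfig/crio.minikube with an --insecure-registry entry for the service CIDR (10.96.0.0/12) and restarts CRI-O. A small sketch, assuming shell access to the node, of how that result can be checked:

	    # Sketch: confirm the CRI-O options drop-in and that the service restarted cleanly
	    cat /etc/sysconfig/crio.minikube   # should contain CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    systemctl is-active crio           # expected: active
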
	I1002 07:16:49.545292  346554 start.go:293] postStartSetup for "ha-550225" (driver="docker")
	I1002 07:16:49.545335  346554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:16:49.545400  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:16:49.545448  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.562765  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.663440  346554 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:16:49.667012  346554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:16:49.667043  346554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:16:49.667055  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:16:49.667131  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:16:49.667227  346554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:16:49.667243  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:16:49.667356  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:16:49.675157  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:49.693566  346554 start.go:296] duration metric: took 148.259083ms for postStartSetup
	I1002 07:16:49.693674  346554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:16:49.693733  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.711628  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.808263  346554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:16:49.813222  346554 fix.go:56] duration metric: took 6.040285845s for fixHost
	I1002 07:16:49.813250  346554 start.go:83] releasing machines lock for "ha-550225", held for 6.040338171s
	I1002 07:16:49.813321  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:49.832086  346554 ssh_runner.go:195] Run: cat /version.json
	I1002 07:16:49.832138  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.832170  346554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:16:49.832223  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.860178  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.874339  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.958866  346554 ssh_runner.go:195] Run: systemctl --version
	I1002 07:16:50.049981  346554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:16:50.088401  346554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:16:50.093782  346554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:16:50.093888  346554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:16:50.102679  346554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:16:50.102707  346554 start.go:495] detecting cgroup driver to use...
	I1002 07:16:50.102739  346554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:16:50.102790  346554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:16:50.119025  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:16:50.132406  346554 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:16:50.132508  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:16:50.147702  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:16:50.161840  346554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:16:50.285662  346554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:16:50.412243  346554 docker.go:234] disabling docker service ...
	I1002 07:16:50.412358  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:16:50.429880  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:16:50.443435  346554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:16:50.570143  346554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:16:50.705200  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:16:50.718349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:16:50.732391  346554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:16:50.732489  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.741688  346554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:16:50.741842  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.751301  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.760089  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.769286  346554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:16:50.777484  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.786723  346554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.795606  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.804393  346554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:16:50.812287  346554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:16:50.819774  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:16:50.940841  346554 ssh_runner.go:195] Run: sudo systemctl restart crio
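
	The sed edits above adjust /etc/crio/crio.conf.d/02-crio.conf before the restart: the pause image is pinned to registry.k8s.io/pause:3.10.1, the cgroup manager is set to cgroupfs with conmon_cgroup "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A condensed sketch of the core of that sequence as one script (paths and keys taken from the log; file layout can differ between CRI-O versions):

	    # Sketch: the CRI-O drop-in tweaks applied above, condensed into one script
	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    sudo systemctl daemon-reload && sudo systemctl restart crio
	    sudo crictl info >/dev/null && echo "CRI-O answering on /var/run/crio/crio.sock"
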
	I1002 07:16:51.084825  346554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:16:51.084933  346554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:16:51.088952  346554 start.go:563] Will wait 60s for crictl version
	I1002 07:16:51.089022  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:16:51.093255  346554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:16:51.121871  346554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:16:51.122035  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:16:51.151306  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:16:51.186151  346554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:16:51.188993  346554 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:16:51.205719  346554 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:16:51.209600  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:16:51.219722  346554 kubeadm.go:883] updating cluster {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:16:51.219870  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:51.219932  346554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:16:51.259348  346554 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:16:51.259373  346554 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:16:51.259435  346554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:16:51.285823  346554 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:16:51.285850  346554 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:16:51.285860  346554 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:16:51.285975  346554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
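
	The kubelet unit shown above is installed as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). A short sketch, assuming node access, of how to confirm the flags that actually took effect:

	    # Sketch: inspect the kubelet drop-in and the running command line
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    systemctl cat kubelet    # merged view of kubelet.service plus drop-ins
	    pgrep -a kubelet         # should show --hostname-override=ha-550225 --node-ip=192.168.49.2
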
	I1002 07:16:51.286067  346554 ssh_runner.go:195] Run: crio config
	I1002 07:16:51.349840  346554 cni.go:84] Creating CNI manager for ""
	I1002 07:16:51.349864  346554 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:16:51.349907  346554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:16:51.349941  346554 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-550225 NodeName:ha-550225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:16:51.350123  346554 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-550225"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
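
	The kubeadm config rendered above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp further below). A sketch, assuming the kubeadm v1.34.1 binary under /var/lib/minikube/binaries, of a quick sanity check before it is used:

	    # Sketch: validate the generated kubeadm config on the node ("config validate" exists in recent kubeadm releases)
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
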
	
	I1002 07:16:51.350149  346554 kube-vip.go:115] generating kube-vip config ...
	I1002 07:16:51.350220  346554 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:16:51.362455  346554 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
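
	Because `lsmod | grep ip_vs` found no IPVS modules, kube-vip gives up on control-plane load-balancing and only manages the VIP. A hypothetical sketch of what making IPVS available would look like on a host whose kernel ships the modules (the kernel here apparently does not):

	    # Sketch: load IPVS modules so kube-vip could enable control-plane load-balancing
	    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	    lsmod | grep ip_vs   # should now list the ip_vs modules
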
	I1002 07:16:51.362590  346554 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
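
	This manifest is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp below), so the kubelet runs kube-vip as a static pod and the VIP 192.168.49.254 is advertised on eth0 of the current leader. A brief sketch of how that can be checked once the node is up:

	    # Sketch: confirm the kube-vip static pod and the advertised VIP
	    ls -l /etc/kubernetes/manifests/kube-vip.yaml
	    sudo crictl ps --name kube-vip                 # static pod container should be running
	    ip addr show dev eth0 | grep 192.168.49.254    # VIP bound on the elected leader
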
	I1002 07:16:51.362683  346554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:16:51.370716  346554 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:16:51.370824  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 07:16:51.378562  346554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:16:51.392384  346554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:16:51.405890  346554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1002 07:16:51.418852  346554 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:16:51.431748  346554 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:16:51.435456  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:16:51.445200  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:16:51.564279  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:16:51.580309  346554 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.2
	I1002 07:16:51.580335  346554 certs.go:195] generating shared ca certs ...
	I1002 07:16:51.580352  346554 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:51.580577  346554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:16:51.580643  346554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:16:51.580658  346554 certs.go:257] generating profile certs ...
	I1002 07:16:51.580760  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:16:51.580851  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa
	I1002 07:16:51.580915  346554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:16:51.580931  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:16:51.580960  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:16:51.580981  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:16:51.581001  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:16:51.581029  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:16:51.581060  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:16:51.581082  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:16:51.581099  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:16:51.581172  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:16:51.581223  346554 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:16:51.581238  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:16:51.581269  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:16:51.581323  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:16:51.581355  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:16:51.581425  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:51.581476  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.581497  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.581511  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.582046  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:16:51.608528  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:16:51.630032  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:16:51.651693  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:16:51.672816  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:16:51.694334  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:16:51.713045  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:16:51.734929  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:16:51.759074  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:16:51.783798  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:16:51.810129  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:16:51.829572  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:16:51.844038  346554 ssh_runner.go:195] Run: openssl version
	I1002 07:16:51.850521  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:16:51.859107  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.863052  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.863200  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.905139  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:16:51.915686  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:16:51.924646  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.928631  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.928697  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.970474  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:16:51.979037  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:16:51.988282  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.992329  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.992400  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:16:52.034608  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
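
	The `openssl x509 -hash -noout` runs above compute the subject-name hashes used to name the /etc/ssl/certs/<hash>.0 symlinks (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL locates CA certificates in a hashed directory. A generic sketch of that convention for any certificate file:

	    # Sketch: install a CA certificate under the OpenSSL hashed-name convention
	    CERT=/usr/share/ca-certificates/294357.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
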
	I1002 07:16:52.043437  346554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:16:52.047807  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:16:52.090171  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:16:52.132189  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:16:52.173672  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:16:52.215246  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:16:52.259493  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
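
	The `-checkend 86400` calls above verify that each control-plane certificate remains valid for at least another 24 hours (exit status 0 means it will not expire within that window). A small sketch looping over the same kind of files:

	    # Sketch: report which certs would expire within the next 24h
	    for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	             /var/lib/minikube/certs/etcd/server.crt \
	             /var/lib/minikube/certs/front-proxy-client.crt; do
	      sudo openssl x509 -noout -in "$c" -checkend 86400 \
	        && echo "OK   $c" || echo "WARN: $c expires within 24h"
	    done
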
	I1002 07:16:52.303359  346554 kubeadm.go:400] StartCluster: {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:16:52.303541  346554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:16:52.303637  346554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:16:52.411948  346554 cri.go:89] found id: ""
	I1002 07:16:52.412087  346554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:16:52.423926  346554 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:16:52.423985  346554 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:16:52.424072  346554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:16:52.435971  346554 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:16:52.436519  346554 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-550225" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:52.436691  346554 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-292504/kubeconfig needs updating (will repair): [kubeconfig missing "ha-550225" cluster setting kubeconfig missing "ha-550225" context setting]
	I1002 07:16:52.436999  346554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
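kubeconfig.go above notices that the "ha-550225" cluster and context entries are missing from the jenkins kubeconfig and repairs the file under a write lock. A rough sketch of such a repair using client-go's clientcmd package (ensureCluster is a hypothetical helper and the specific fields set are assumptions; minikube's actual repair logic differs in detail):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

// ensureCluster adds cluster and context entries for the given profile to a
// kubeconfig file if they are missing, roughly what the
// "needs updating (will repair)" step above does.
func ensureCluster(path, name, server, caPath string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cluster := api.NewCluster()
		cluster.Server = server               // e.g. https://192.168.49.2:8443
		cluster.CertificateAuthority = caPath // e.g. .minikube/ca.crt
		cfg.Clusters[name] = cluster
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	cfg.CurrentContext = name // assumption: make the repaired profile current
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := ensureCluster(
		"/home/jenkins/minikube-integration/21643-292504/kubeconfig",
		"ha-550225",
		"https://192.168.49.2:8443",
		"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt",
	); err != nil {
		panic(err)
	}
}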
	I1002 07:16:52.437624  346554 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:16:52.438178  346554 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:16:52.438372  346554 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:16:52.438396  346554 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:16:52.438439  346554 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:16:52.438479  346554 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:16:52.438242  346554 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:16:52.438946  346554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:16:52.453843  346554 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:16:52.453908  346554 kubeadm.go:601] duration metric: took 29.902711ms to restartPrimaryControlPlane
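The decision above comes down to the exit status of "diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new": exit 0 means the freshly generated kubeadm config matches what is already on the node, so the control plane does not need to be reconfigured. A small illustrative sketch of that check (needsReconfigure is a hypothetical helper, not the minikube function):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// needsReconfigure compares the kubeadm config already on the node with the
// freshly generated one. diff exits 0 when the files are identical and 1
// when they differ.
func needsReconfigure(current, generated string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", current, generated).Run()
	if err == nil {
		return false, nil // identical: no reconfiguration required
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil // files differ: the control plane must be reconfigured
	}
	return false, err // exit code 2 or failure to run diff at all
}

func main() {
	changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	fmt.Println("needs reconfiguration:", changed)
}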
	I1002 07:16:52.454041  346554 kubeadm.go:402] duration metric: took 150.691034ms to StartCluster
	I1002 07:16:52.454081  346554 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:52.454172  346554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:52.454859  346554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:52.455192  346554 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:16:52.455245  346554 start.go:241] waiting for startup goroutines ...
	I1002 07:16:52.455279  346554 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:16:52.455778  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:52.480332  346554 out.go:179] * Enabled addons: 
	I1002 07:16:52.484238  346554 addons.go:514] duration metric: took 28.941955ms for enable addons: enabled=[]
	I1002 07:16:52.484336  346554 start.go:246] waiting for cluster config update ...
	I1002 07:16:52.484369  346554 start.go:255] writing updated cluster config ...
	I1002 07:16:52.488274  346554 out.go:203] 
	I1002 07:16:52.492458  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:52.492645  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:52.496127  346554 out.go:179] * Starting "ha-550225-m02" control-plane node in "ha-550225" cluster
	I1002 07:16:52.499195  346554 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:16:52.502435  346554 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:16:52.505497  346554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:16:52.505566  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:52.505677  346554 cache.go:58] Caching tarball of preloaded images
	I1002 07:16:52.505807  346554 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:16:52.505838  346554 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:16:52.506003  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:52.530361  346554 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:16:52.530380  346554 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:16:52.530392  346554 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:16:52.530415  346554 start.go:360] acquireMachinesLock for ha-550225-m02: {Name:mk11ef625bc214163cbeacdb736ddec4214a8374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:16:52.530475  346554 start.go:364] duration metric: took 37.3µs to acquireMachinesLock for "ha-550225-m02"
	I1002 07:16:52.530499  346554 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:16:52.530506  346554 fix.go:54] fixHost starting: m02
	I1002 07:16:52.530790  346554 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:16:52.559198  346554 fix.go:112] recreateIfNeeded on ha-550225-m02: state=Stopped err=<nil>
	W1002 07:16:52.559226  346554 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:16:52.563143  346554 out.go:252] * Restarting existing docker container for "ha-550225-m02" ...
	I1002 07:16:52.563247  346554 cli_runner.go:164] Run: docker start ha-550225-m02
	I1002 07:16:52.985736  346554 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:16:53.019972  346554 kic.go:430] container "ha-550225-m02" state is running.
	I1002 07:16:53.020350  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:53.045172  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:53.045437  346554 machine.go:93] provisionDockerMachine start ...
	I1002 07:16:53.045501  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:53.087166  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:53.087519  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:53.087528  346554 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:16:53.088138  346554 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45188->127.0.0.1:33193: read: connection reset by peer
	I1002 07:16:56.311713  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:16:56.311782  346554 ubuntu.go:182] provisioning hostname "ha-550225-m02"
	I1002 07:16:56.311878  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:56.344609  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:56.344917  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:56.344929  346554 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225-m02 && echo "ha-550225-m02" | sudo tee /etc/hostname
	I1002 07:16:56.639669  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:16:56.639788  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:56.668649  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:56.668967  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:56.668991  346554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:16:56.892812  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:16:56.892848  346554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:16:56.892865  346554 ubuntu.go:190] setting up certificates
	I1002 07:16:56.892886  346554 provision.go:84] configureAuth start
	I1002 07:16:56.892966  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:56.931268  346554 provision.go:143] copyHostCerts
	I1002 07:16:56.931313  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:56.931346  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:16:56.931357  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:56.931436  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:16:56.931520  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:56.931541  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:16:56.931548  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:56.931576  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:16:56.931619  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:56.931640  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:16:56.931645  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:56.931673  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:16:56.931727  346554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225-m02 san=[127.0.0.1 192.168.49.3 ha-550225-m02 localhost minikube]
	I1002 07:16:57.380087  346554 provision.go:177] copyRemoteCerts
	I1002 07:16:57.380161  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:16:57.380209  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:57.399377  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:57.503607  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:16:57.503674  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:16:57.534864  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:16:57.534935  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 07:16:57.579624  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:16:57.579686  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:16:57.613798  346554 provision.go:87] duration metric: took 720.891298ms to configureAuth
	I1002 07:16:57.613866  346554 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:16:57.614125  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:57.614268  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:57.655334  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:57.655649  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:57.655669  346554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:16:58.296218  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:16:58.296241  346554 machine.go:96] duration metric: took 5.250794733s to provisionDockerMachine
	I1002 07:16:58.296266  346554 start.go:293] postStartSetup for "ha-550225-m02" (driver="docker")
	I1002 07:16:58.296279  346554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:16:58.296361  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:16:58.296407  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.334246  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.454625  346554 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:16:58.462912  346554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:16:58.462946  346554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:16:58.462957  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:16:58.463024  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:16:58.463132  346554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:16:58.463146  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:16:58.463245  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:16:58.476350  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:58.502934  346554 start.go:296] duration metric: took 206.651168ms for postStartSetup
	I1002 07:16:58.503074  346554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:16:58.503140  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.541010  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.704044  346554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:16:58.724725  346554 fix.go:56] duration metric: took 6.194210695s for fixHost
	I1002 07:16:58.724751  346554 start.go:83] releasing machines lock for "ha-550225-m02", held for 6.194264053s
	I1002 07:16:58.724830  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:58.757236  346554 out.go:179] * Found network options:
	I1002 07:16:58.760259  346554 out.go:179]   - NO_PROXY=192.168.49.2
	W1002 07:16:58.763701  346554 proxy.go:120] fail to check proxy env: Error ip not in block
	W1002 07:16:58.763752  346554 proxy.go:120] fail to check proxy env: Error ip not in block
	I1002 07:16:58.763820  346554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:16:58.763852  346554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:16:58.763870  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.763907  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.799805  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.800051  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:59.297366  346554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:16:59.320265  346554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:16:59.320354  346554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:16:59.335012  346554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:16:59.335039  346554 start.go:495] detecting cgroup driver to use...
	I1002 07:16:59.335070  346554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:16:59.335161  346554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:16:59.357972  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:16:59.378445  346554 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:16:59.378521  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:16:59.402692  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:16:59.423049  346554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:16:59.777657  346554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:17:00.088553  346554 docker.go:234] disabling docker service ...
	I1002 07:17:00.088656  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:17:00.130593  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:17:00.210008  346554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:17:00.633988  346554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:17:01.021589  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:17:01.054167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:17:01.092894  346554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:17:01.092980  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.111830  346554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:17:01.111928  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.139965  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.151897  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.168595  346554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:17:01.186410  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.204646  346554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.221763  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.236700  346554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:17:01.257944  346554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:17:01.272835  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:17:01.618372  346554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:18:32.051852  346554 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.433435555s)
	I1002 07:18:32.051878  346554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:18:32.051938  346554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:18:32.056156  346554 start.go:563] Will wait 60s for crictl version
	I1002 07:18:32.056222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:18:32.060117  346554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:18:32.088770  346554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
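After restarting CRI-O (which took 1m30s here), start.go waits up to 60s for the socket at /var/run/crio/crio.sock to appear and another 60s for crictl version to answer. A simplified sketch of that two-stage wait, run locally rather than over SSH (waitFor is a hypothetical helper, not the actual minikube code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor polls check every 500ms until it succeeds or the timeout elapses.
func waitFor(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	const sock = "/var/run/crio/crio.sock"
	// Stage 1: wait for the CRI-O socket to exist after the crio restart.
	if err := waitFor(60*time.Second, func() error {
		_, err := os.Stat(sock)
		return err
	}); err != nil {
		panic(err)
	}
	// Stage 2: wait until crictl can actually talk to the runtime.
	if err := waitFor(60*time.Second, func() error {
		return exec.Command("sudo", "crictl", "version").Run()
	}); err != nil {
		panic(err)
	}
	fmt.Println("CRI-O is up")
}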
	I1002 07:18:32.088860  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:18:32.119432  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:18:32.154051  346554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:18:32.156909  346554 out.go:179]   - env NO_PROXY=192.168.49.2
	I1002 07:18:32.159957  346554 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:18:32.177164  346554 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:18:32.181230  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:18:32.191471  346554 mustload.go:65] Loading cluster: ha-550225
	I1002 07:18:32.191729  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:18:32.191999  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:18:32.209130  346554 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:18:32.209416  346554 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.3
	I1002 07:18:32.209433  346554 certs.go:195] generating shared ca certs ...
	I1002 07:18:32.209448  346554 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:18:32.209574  346554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:18:32.209622  346554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:18:32.209635  346554 certs.go:257] generating profile certs ...
	I1002 07:18:32.209712  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:18:32.209761  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.e172f685
	I1002 07:18:32.209802  346554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:18:32.209816  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:18:32.209829  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:18:32.209843  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:18:32.209855  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:18:32.209869  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:18:32.209883  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:18:32.209898  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:18:32.209908  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:18:32.209964  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:18:32.209998  346554 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:18:32.210010  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:18:32.210033  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:18:32.210061  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:18:32.210089  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:18:32.210137  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:18:32.210168  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.210187  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.210198  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.210261  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:18:32.227689  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:18:32.315413  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1002 07:18:32.319445  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1002 07:18:32.328111  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1002 07:18:32.331777  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1002 07:18:32.340081  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1002 07:18:32.343746  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1002 07:18:32.351558  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1002 07:18:32.354911  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1002 07:18:32.362878  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1002 07:18:32.366632  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1002 07:18:32.374581  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1002 07:18:32.378281  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1002 07:18:32.386552  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:18:32.405394  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:18:32.422759  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:18:32.440360  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:18:32.457759  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:18:32.475843  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:18:32.493288  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:18:32.510289  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:18:32.527991  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:18:32.545549  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:18:32.562952  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:18:32.580383  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1002 07:18:32.593477  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1002 07:18:32.606933  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1002 07:18:32.619772  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1002 07:18:32.634020  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1002 07:18:32.646873  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1002 07:18:32.659836  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
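The stat/scp pairs above pull the cluster-wide secrets (sa.pub, sa.key, the front-proxy CA and the etcd CA) off the primary control plane into memory and then push them, along with the shared and profile certificates, onto m02, so every control-plane node signs service-account tokens and serves etcd with the same key material. A condensed sketch of that read-then-write pattern (copyViaMemory is a hypothetical helper using the system ssh client and plain hostnames; minikube actually dials 127.0.0.1 on the mapped ports shown in the log):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// copyViaMemory reads a file from the source host and writes it to the same
// path on the destination host, mirroring the "scp ... --> memory" followed
// by "scp memory --> ..." lines above.
func copyViaMemory(srcHost, dstHost, path string) error {
	data, err := exec.Command("ssh", srcHost, "sudo", "cat", path).Output()
	if err != nil {
		return fmt.Errorf("read %s from %s: %w", path, srcHost, err)
	}
	write := exec.Command("ssh", dstHost, "sudo", "tee", path)
	write.Stdin = bytes.NewReader(data)
	return write.Run()
}

func main() {
	// Key material every control-plane node must share.
	shared := []string{
		"/var/lib/minikube/certs/sa.pub",
		"/var/lib/minikube/certs/sa.key",
		"/var/lib/minikube/certs/front-proxy-ca.crt",
		"/var/lib/minikube/certs/front-proxy-ca.key",
		"/var/lib/minikube/certs/etcd/ca.crt",
		"/var/lib/minikube/certs/etcd/ca.key",
	}
	for _, p := range shared {
		if err := copyViaMemory("ha-550225", "ha-550225-m02", p); err != nil {
			panic(err)
		}
	}
}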
	I1002 07:18:32.673417  346554 ssh_runner.go:195] Run: openssl version
	I1002 07:18:32.679719  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:18:32.688081  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.692003  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.692135  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.733286  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:18:32.741334  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:18:32.749624  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.753431  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.753505  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.794364  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:18:32.802247  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:18:32.810290  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.813847  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.813927  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.854739  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:18:32.862471  346554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:18:32.866281  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:18:32.907787  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:18:32.948617  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:18:32.989448  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:18:33.030881  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:18:33.074016  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 07:18:33.117026  346554 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1002 07:18:33.117170  346554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:18:33.117220  346554 kube-vip.go:115] generating kube-vip config ...
	I1002 07:18:33.117288  346554 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:18:33.133837  346554 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:18:33.133931  346554 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1002 07:18:33.134029  346554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:18:33.142503  346554 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:18:33.142627  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1002 07:18:33.150436  346554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 07:18:33.163196  346554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:18:33.176800  346554 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:18:33.191119  346554 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:18:33.195012  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:18:33.205076  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:18:33.339361  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:18:33.353170  346554 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:18:33.353495  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:18:33.359500  346554 out.go:179] * Verifying Kubernetes components...
	I1002 07:18:33.362288  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:18:33.491257  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:18:33.505467  346554 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1002 07:18:33.505560  346554 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1002 07:18:33.505989  346554 node_ready.go:35] waiting up to 6m0s for node "ha-550225-m02" to be "Ready" ...
	W1002 07:18:35.506749  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:38.010468  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:40.016084  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:42.506872  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:44.507212  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:47.007659  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:49.506544  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:51.506605  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:18:54.785251  346554 node_ready.go:49] node "ha-550225-m02" is "Ready"
	I1002 07:18:54.785285  346554 node_ready.go:38] duration metric: took 21.279267345s for node "ha-550225-m02" to be "Ready" ...
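node_ready.go above keeps polling the API server for the node object, riding out the connection-refused errors while kube-apiserver on the primary comes back, until the Ready condition turns True (about 21s in this run). A stripped-down version of that wait using client-go (waitNodeReady is a hypothetical helper; minikube's own retry and backoff logic differs):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its Ready condition is True,
// tolerating transient errors such as the "connection refused" seen above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // roughly the retry cadence visible in the log
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21643-292504/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-550225-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("ha-550225-m02 is Ready")
}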
	I1002 07:18:54.785300  346554 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:18:54.785382  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:55.286257  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:55.786278  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:56.285480  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:56.785495  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:57.286432  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:57.786472  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:58.285596  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:58.786260  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:59.286148  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:59.785674  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:00.286401  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:00.786468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:01.286310  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:01.786133  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:02.285476  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:02.785523  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:03.285578  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:03.785477  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:04.285835  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:04.786152  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:05.285495  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:05.785558  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:06.285602  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:06.785496  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:07.286468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:07.786358  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:08.286294  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:08.786349  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:09.286208  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:09.786292  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:10.285577  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:10.785589  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:11.286341  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:11.785523  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:12.286415  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:12.786007  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:13.286205  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:13.786328  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:14.285849  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:14.786397  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:15.285488  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:15.785431  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:16.285445  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:16.785468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:17.285527  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:17.785637  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:18.285535  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:18.786137  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:19.286152  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:19.786052  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:20.285507  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:20.785522  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:21.285716  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:21.786849  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:22.286372  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:22.786418  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:23.286092  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:23.786120  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:24.285506  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:24.785439  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:25.286469  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:25.785780  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:26.285507  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:26.785611  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:27.286260  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:27.785499  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:28.285509  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:28.785521  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:29.285762  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:29.786049  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:30.286329  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:30.785543  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:31.285473  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:31.786013  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:32.285818  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:32.785931  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:33.285557  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
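
The burst of probes above is an apiserver process wait: the same "sudo pgrep -xnf kube-apiserver.*minikube.*" command is re-issued over SSH roughly every 500 ms until a matching process is found or the wait gives up, at which point the log gathering below starts. A minimal sketch of that poll loop, assuming a hypothetical 8-minute deadline and running the probe on the local host rather than through minikube's SSH runner; only the pgrep command line and the ~500 ms interval are taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(8 * time.Minute) // hypothetical timeout, not from the log
    	for time.Now().Before(deadline) {
    		// Same probe as in the log: -x exact match, -n newest match only,
    		// -f match against the full command line. pgrep exits non-zero when
    		// nothing matches, which is what keeps this loop going.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for a kube-apiserver process")
    }
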
	I1002 07:19:33.786122  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:33.786216  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:33.819648  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:33.819668  346554 cri.go:89] found id: ""
	I1002 07:19:33.819678  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:33.819746  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.823889  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:33.823960  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:33.855251  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:33.855272  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:33.855277  346554 cri.go:89] found id: ""
	I1002 07:19:33.855285  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:33.855351  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.858992  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.862888  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:33.862975  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:33.894144  346554 cri.go:89] found id: ""
	I1002 07:19:33.894169  346554 logs.go:282] 0 containers: []
	W1002 07:19:33.894178  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:33.894184  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:33.894243  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:33.921104  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:33.921125  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:33.921130  346554 cri.go:89] found id: ""
	I1002 07:19:33.921137  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:33.921194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.925016  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.928536  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:33.928631  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:33.961082  346554 cri.go:89] found id: ""
	I1002 07:19:33.961111  346554 logs.go:282] 0 containers: []
	W1002 07:19:33.961121  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:33.961127  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:33.961187  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:33.993876  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:33.993901  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:33.993906  346554 cri.go:89] found id: ""
	I1002 07:19:33.993916  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:33.993979  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.999741  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:34.004783  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:34.004869  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:34.034228  346554 cri.go:89] found id: ""
	I1002 07:19:34.034256  346554 logs.go:282] 0 containers: []
	W1002 07:19:34.034265  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:34.034275  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:34.034288  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:34.096737  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:34.096779  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:34.132301  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:34.132339  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:34.182701  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:34.182737  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:34.217015  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:34.217044  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:34.232712  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:34.232741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:34.652633  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:34.643757    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.644504    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.646352    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647072    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647911    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:34.643757    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.644504    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.646352    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647072    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647911    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:34.652655  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:34.652669  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:34.681086  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:34.681118  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:34.708033  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:34.708062  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:34.793299  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:34.793407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:34.848620  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:34.848649  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:34.948533  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:34.948572  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
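
Each gathering pass above follows the same pattern: for every control-plane component, list matching CRI containers with "sudo crictl ps -a --quiet --name=<component>" (the "found id" lines), then tail each container's log with "crictl logs --tail 400 <id>". A sketch of that pattern, with the crictl command lines copied from the log; the Go wrapper itself is an assumption and runs locally rather than over minikube's SSH runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists CRI containers in all states, IDs only, filtered by name,
    // mirroring the "sudo crictl ps -a --quiet --name=..." calls in the log.
    func containerIDs(name string) []string {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
    		for _, id := range containerIDs(component) {
    			fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
    			// Same tail depth as in the log, keeping each dump bounded.
    			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Print(string(logs))
    		}
    	}
    }

The fixed "--tail 400" keeps every pass bounded, which is why the cycle can repeat every few seconds without the report growing without limit.
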
	I1002 07:19:37.477483  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:37.488961  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:37.489035  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:37.518325  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:37.518349  346554 cri.go:89] found id: ""
	I1002 07:19:37.518358  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:37.518419  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.522140  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:37.522269  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:37.549073  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:37.549093  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:37.549098  346554 cri.go:89] found id: ""
	I1002 07:19:37.549105  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:37.549190  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.552869  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.556417  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:37.556497  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:37.589096  346554 cri.go:89] found id: ""
	I1002 07:19:37.589122  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.589130  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:37.589137  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:37.589199  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:37.615330  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:37.615354  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:37.615360  346554 cri.go:89] found id: ""
	I1002 07:19:37.615367  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:37.615424  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.619166  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.622673  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:37.622742  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:37.648426  346554 cri.go:89] found id: ""
	I1002 07:19:37.648458  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.648467  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:37.648474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:37.648536  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:37.676515  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:37.676536  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:37.676541  346554 cri.go:89] found id: ""
	I1002 07:19:37.676549  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:37.676605  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.680280  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.684478  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:37.684552  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:37.710689  346554 cri.go:89] found id: ""
	I1002 07:19:37.710713  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.710722  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:37.710731  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:37.710741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:37.807134  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:37.807171  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:37.877814  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:37.869236    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.869721    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871280    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871668    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.873245    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:37.869236    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.869721    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871280    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871668    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.873245    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:37.877839  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:37.877853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:37.920820  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:37.920854  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:37.956765  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:37.956802  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:37.985482  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:37.985510  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:38.017517  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:38.017548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:38.100846  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:38.100884  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:38.136290  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:38.136318  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:38.151732  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:38.151763  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:38.177792  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:38.177822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:38.229226  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:38.229260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
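
The recurring "describe nodes" failures above all reduce to the same condition: a kube-apiserver container exists (its ID keeps being found by crictl), but nothing is accepting connections on localhost:8443 yet, so every kubectl call dies with "connection refused". A minimal probe for that state; the address comes from the log, while the 2-second timeout is a hypothetical choice:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// This is the state the log is in: the container is present but the
    		// secure port is not serving, so "kubectl describe nodes" keeps
    		// failing with the same stderr block each cycle.
    		fmt.Println("apiserver not serving:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open; kubectl calls should start succeeding")
    }

Once that port opens, the "describe nodes" step would produce node output instead of the repeated stderr blocks seen here.
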
	I1002 07:19:40.756410  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:40.767378  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:40.767448  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:40.799187  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:40.799205  346554 cri.go:89] found id: ""
	I1002 07:19:40.799213  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:40.799268  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.804369  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:40.804454  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:40.830559  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:40.830628  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:40.830652  346554 cri.go:89] found id: ""
	I1002 07:19:40.830679  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:40.830771  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.835205  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.839714  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:40.839827  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:40.867014  346554 cri.go:89] found id: ""
	I1002 07:19:40.867039  346554 logs.go:282] 0 containers: []
	W1002 07:19:40.867048  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:40.867054  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:40.867141  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:40.905810  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:40.905829  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:40.905835  346554 cri.go:89] found id: ""
	I1002 07:19:40.905842  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:40.905898  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.909648  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.913397  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:40.913471  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:40.940488  346554 cri.go:89] found id: ""
	I1002 07:19:40.940511  346554 logs.go:282] 0 containers: []
	W1002 07:19:40.940520  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:40.940526  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:40.940585  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:40.968408  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:40.968429  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:40.968439  346554 cri.go:89] found id: ""
	I1002 07:19:40.968447  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:40.968503  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.972336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.976070  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:40.976163  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:41.010288  346554 cri.go:89] found id: ""
	I1002 07:19:41.010318  346554 logs.go:282] 0 containers: []
	W1002 07:19:41.010328  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:41.010338  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:41.010353  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:41.058706  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:41.058741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:41.085223  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:41.085252  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:41.117537  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:41.117564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:41.218224  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:41.218265  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:41.234686  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:41.234727  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:41.270240  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:41.270276  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:41.321885  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:41.321922  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:41.350649  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:41.350684  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:41.382710  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:41.382740  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:41.465872  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:41.465911  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:41.547196  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:41.537685    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539123    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539741    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.541682    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.542291    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:41.537685    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539123    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539741    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.541682    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.542291    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:41.547220  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:41.547234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.074126  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:44.087746  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:44.087861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:44.116198  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.116223  346554 cri.go:89] found id: ""
	I1002 07:19:44.116232  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:44.116290  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.120227  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:44.120325  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:44.146916  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:44.146943  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:44.146948  346554 cri.go:89] found id: ""
	I1002 07:19:44.146955  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:44.147009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.151266  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.155925  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:44.156012  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:44.190430  346554 cri.go:89] found id: ""
	I1002 07:19:44.190458  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.190467  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:44.190473  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:44.190529  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:44.219366  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:44.219387  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:44.219392  346554 cri.go:89] found id: ""
	I1002 07:19:44.219400  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:44.219455  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.223324  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.226924  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:44.227000  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:44.252543  346554 cri.go:89] found id: ""
	I1002 07:19:44.252566  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.252576  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:44.252583  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:44.252650  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:44.280466  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:44.280489  346554 cri.go:89] found id: ""
	I1002 07:19:44.280498  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:44.280559  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.284050  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:44.284122  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:44.314223  346554 cri.go:89] found id: ""
	I1002 07:19:44.314250  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.314259  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:44.314269  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:44.314304  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.340933  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:44.340965  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:44.377320  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:44.377352  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:44.411349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:44.411377  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:44.516647  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:44.516695  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:44.585736  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:44.578237    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.578651    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580147    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580498    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.581966    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:44.578237    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.578651    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580147    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580498    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.581966    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:44.585771  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:44.585785  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:44.629867  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:44.629909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:44.681709  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:44.681750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:44.710536  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:44.710566  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:44.801698  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:44.801744  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:44.834146  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:44.834175  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
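
Alongside the per-container logs, every pass also collects the same host-side sources: the kubelet and CRI-O journals, warning-and-above kernel messages, and an overall container status listing. The command lines below are copied verbatim from the log; wrapping them in Go like this is only a sketch of the pattern, not minikube's implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	sources := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"CRI-O", "sudo journalctl -u crio -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range sources {
    		fmt.Println("Gathering logs for", s.name, "...")
    		// Each command is run through bash exactly as it appears in the log.
    		out, _ := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		fmt.Print(string(out))
    	}
    }
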
	I1002 07:19:47.351602  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:47.362458  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:47.362546  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:47.391769  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:47.391792  346554 cri.go:89] found id: ""
	I1002 07:19:47.391802  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:47.391863  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.395882  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:47.395971  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:47.428129  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:47.428151  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:47.428156  346554 cri.go:89] found id: ""
	I1002 07:19:47.428164  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:47.428225  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.432313  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.436344  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:47.436415  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:47.464208  346554 cri.go:89] found id: ""
	I1002 07:19:47.464230  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.464238  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:47.464244  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:47.464302  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:47.494674  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:47.494731  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:47.494773  346554 cri.go:89] found id: ""
	I1002 07:19:47.494800  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:47.494885  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.499610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.503658  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:47.503779  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:47.532490  346554 cri.go:89] found id: ""
	I1002 07:19:47.532517  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.532527  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:47.532534  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:47.532599  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:47.565084  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:47.565122  346554 cri.go:89] found id: ""
	I1002 07:19:47.565131  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:47.565231  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.569404  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:47.569483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:47.597243  346554 cri.go:89] found id: ""
	I1002 07:19:47.597266  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.597275  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:47.597284  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:47.597294  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:47.693710  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:47.693748  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:47.771715  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:47.763458    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.764216    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.765967    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.766445    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.768080    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:47.763458    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.764216    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.765967    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.766445    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.768080    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:47.771739  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:47.771752  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:47.810005  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:47.810090  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:47.890792  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:47.890824  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:47.977230  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:47.977271  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:48.018612  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:48.018643  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:48.105364  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:48.105401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:48.124841  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:48.124870  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:48.193027  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:48.193069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:48.239251  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:48.239279  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:50.782662  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:50.794011  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:50.794105  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:50.838191  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:50.838216  346554 cri.go:89] found id: ""
	I1002 07:19:50.838225  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:50.838286  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.842655  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:50.842755  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:50.891807  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:50.891833  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:50.891839  346554 cri.go:89] found id: ""
	I1002 07:19:50.891847  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:50.891964  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.899196  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.904048  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:50.904143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:50.939603  346554 cri.go:89] found id: ""
	I1002 07:19:50.939626  346554 logs.go:282] 0 containers: []
	W1002 07:19:50.939635  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:50.939641  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:50.939735  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:50.971030  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:50.971053  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:50.971059  346554 cri.go:89] found id: ""
	I1002 07:19:50.971067  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:50.971179  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.975612  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.980140  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:50.980242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:51.025029  346554 cri.go:89] found id: ""
	I1002 07:19:51.025055  346554 logs.go:282] 0 containers: []
	W1002 07:19:51.025064  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:51.025071  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:51.025186  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:51.058743  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:51.058764  346554 cri.go:89] found id: ""
	I1002 07:19:51.058772  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:51.058862  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:51.064931  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:51.065035  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:51.101431  346554 cri.go:89] found id: ""
	I1002 07:19:51.101462  346554 logs.go:282] 0 containers: []
	W1002 07:19:51.101486  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:51.101498  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:51.101531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:51.126461  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:51.126494  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:51.217174  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:51.208157    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.208931    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.210624    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.211554    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.212602    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:51.208157    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.208931    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.210624    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.211554    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.212602    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:51.217200  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:51.217216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:51.279369  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:51.279449  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:51.337216  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:51.337253  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:51.425630  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:51.425669  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:51.528560  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:51.528601  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:51.556690  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:51.556719  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:51.600118  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:51.600251  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:51.632616  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:51.632650  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:51.662904  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:51.662935  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
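	The repeated "connection refused" errors on localhost:8443 above mean kubectl inside the node cannot reach kube-apiserver yet, so minikube keeps polling and re-collecting logs. A minimal bash sketch of the equivalent manual checks on the node (assuming crictl, ss, and curl are available there; these are illustrative commands, not output captured from this run):
	
	  # is the apiserver container present under CRI-O?
	  sudo crictl ps -a --name kube-apiserver
	  # is anything listening on the apiserver port?
	  sudo ss -ltnp | grep 8443
	  # probe the health endpoint directly (self-signed cert, hence -k)
	  curl -k https://localhost:8443/healthz
	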
	I1002 07:19:54.196274  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:54.207476  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:54.207546  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:54.238643  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:54.238664  346554 cri.go:89] found id: ""
	I1002 07:19:54.238673  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:54.238729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.242382  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:54.242456  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:54.274345  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:54.274377  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:54.274383  346554 cri.go:89] found id: ""
	I1002 07:19:54.274390  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:54.274451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.278686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.283146  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:54.283225  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:54.315609  346554 cri.go:89] found id: ""
	I1002 07:19:54.315635  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.315645  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:54.315652  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:54.315718  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:54.343684  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:54.343709  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:54.343715  346554 cri.go:89] found id: ""
	I1002 07:19:54.343723  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:54.343789  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.347649  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.351327  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:54.351428  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:54.380301  346554 cri.go:89] found id: ""
	I1002 07:19:54.380336  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.380346  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:54.380353  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:54.380440  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:54.413081  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:54.413105  346554 cri.go:89] found id: ""
	I1002 07:19:54.413114  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:54.413172  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.417107  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:54.417181  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:54.450903  346554 cri.go:89] found id: ""
	I1002 07:19:54.450930  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.450947  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:54.450957  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:54.450972  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:54.551509  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:54.551550  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:54.567991  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:54.568018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:54.641344  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:54.632782    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.633432    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635278    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635893    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.637542    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:54.632782    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.633432    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635278    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635893    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.637542    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:54.641366  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:54.641403  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:54.677557  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:54.677592  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:54.742382  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:54.742417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:54.830648  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:54.830681  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:54.866699  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:54.866727  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:54.893138  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:54.893166  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:54.942885  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:54.942920  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:54.977070  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:54.977098  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:57.528866  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:57.540731  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:57.540803  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:57.571921  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:57.571945  346554 cri.go:89] found id: ""
	I1002 07:19:57.571954  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:57.572028  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.575942  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:57.576018  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:57.604185  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:57.604219  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:57.604224  346554 cri.go:89] found id: ""
	I1002 07:19:57.604232  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:57.604326  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.608202  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.611833  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:57.611912  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:57.640401  346554 cri.go:89] found id: ""
	I1002 07:19:57.640431  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.640440  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:57.640447  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:57.640519  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:57.671538  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:57.671560  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:57.671565  346554 cri.go:89] found id: ""
	I1002 07:19:57.671572  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:57.671629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.675430  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.679760  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:57.679837  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:57.707483  346554 cri.go:89] found id: ""
	I1002 07:19:57.707511  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.707521  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:57.707527  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:57.707592  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:57.736308  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:57.736330  346554 cri.go:89] found id: ""
	I1002 07:19:57.736338  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:57.736407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.740334  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:57.740505  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:57.771488  346554 cri.go:89] found id: ""
	I1002 07:19:57.771558  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.771575  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:57.771585  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:57.771599  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:57.824974  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:57.825013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:57.862787  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:57.862825  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:57.891348  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:57.891374  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:57.923682  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:57.923711  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:57.996115  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:57.987953    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.988650    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990229    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990623    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.992277    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:57.987953    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.988650    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990229    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990623    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.992277    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:57.996139  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:57.996155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:58.033126  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:58.033198  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:58.106377  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:58.106415  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:58.139224  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:58.139252  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:58.226478  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:58.226525  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:58.331297  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:58.331338  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:00.847448  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:00.859829  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:00.859905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:00.887965  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:00.888039  346554 cri.go:89] found id: ""
	I1002 07:20:00.888063  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:00.888133  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.892548  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:00.892623  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:00.922567  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:00.922586  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:00.922591  346554 cri.go:89] found id: ""
	I1002 07:20:00.922598  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:00.922653  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.926435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.930250  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:00.930339  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:00.959728  346554 cri.go:89] found id: ""
	I1002 07:20:00.959759  346554 logs.go:282] 0 containers: []
	W1002 07:20:00.959769  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:00.959777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:00.959861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:00.988254  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:00.988317  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:00.988338  346554 cri.go:89] found id: ""
	I1002 07:20:00.988365  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:00.988466  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.993016  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.996699  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:00.996818  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:01.024791  346554 cri.go:89] found id: ""
	I1002 07:20:01.024815  346554 logs.go:282] 0 containers: []
	W1002 07:20:01.024823  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:01.024849  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:01.024931  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:01.056703  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:01.056728  346554 cri.go:89] found id: ""
	I1002 07:20:01.056737  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:01.056820  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:01.061200  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:01.061302  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:01.092652  346554 cri.go:89] found id: ""
	I1002 07:20:01.092680  346554 logs.go:282] 0 containers: []
	W1002 07:20:01.092690  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:01.092701  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:01.092715  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:01.121048  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:01.121084  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:01.227967  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:01.228007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:01.246697  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:01.246728  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:01.299528  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:01.299606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:01.329789  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:01.329875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:01.412310  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:01.412348  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:01.449621  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:01.449651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:01.528807  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:01.519940    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.520990    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.521913    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523485    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523993    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:01.519940    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.520990    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.521913    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523485    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523993    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:01.528832  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:01.528848  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:01.557543  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:01.557575  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:01.606902  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:01.607007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:04.163648  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:04.175704  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:04.175798  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:04.202895  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:04.202920  346554 cri.go:89] found id: ""
	I1002 07:20:04.202929  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:04.202988  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.206773  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:04.206847  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:04.237461  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:04.237484  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:04.237490  346554 cri.go:89] found id: ""
	I1002 07:20:04.237497  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:04.237551  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.241192  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.244646  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:04.244721  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:04.271145  346554 cri.go:89] found id: ""
	I1002 07:20:04.271172  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.271181  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:04.271188  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:04.271290  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:04.301758  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:04.301787  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:04.301792  346554 cri.go:89] found id: ""
	I1002 07:20:04.301800  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:04.301858  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.305658  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.309360  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:04.309437  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:04.339291  346554 cri.go:89] found id: ""
	I1002 07:20:04.339317  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.339339  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:04.339347  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:04.339417  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:04.366771  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:04.366841  346554 cri.go:89] found id: ""
	I1002 07:20:04.366866  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:04.366961  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.371032  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:04.371213  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:04.396810  346554 cri.go:89] found id: ""
	I1002 07:20:04.396889  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.396905  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:04.396916  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:04.396933  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:04.414258  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:04.414291  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:04.478315  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:04.478395  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:04.536808  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:04.536847  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:04.564995  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:04.565025  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:04.592902  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:04.592931  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:04.671813  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:04.671849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:04.710652  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:04.710684  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:04.820627  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:04.820664  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:04.897187  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:04.884402    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.885229    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.886886    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.887493    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.889166    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:04.884402    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.885229    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.886886    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.887493    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.889166    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:04.897212  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:04.897229  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:04.936329  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:04.936358  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.496901  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:07.514473  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:07.514547  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:07.540993  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:07.541017  346554 cri.go:89] found id: ""
	I1002 07:20:07.541025  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:07.541109  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.545015  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:07.545090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:07.572646  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:07.572670  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:07.572675  346554 cri.go:89] found id: ""
	I1002 07:20:07.572683  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:07.572763  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.576707  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.580612  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:07.580684  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:07.606885  346554 cri.go:89] found id: ""
	I1002 07:20:07.606909  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.606917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:07.606923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:07.606980  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:07.633971  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.634051  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:07.634072  346554 cri.go:89] found id: ""
	I1002 07:20:07.634115  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:07.634212  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.638009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.641489  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:07.641558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:07.669226  346554 cri.go:89] found id: ""
	I1002 07:20:07.669252  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.669262  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:07.669269  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:07.669328  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:07.697084  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:07.697110  346554 cri.go:89] found id: ""
	I1002 07:20:07.697119  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:07.697218  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.702023  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:07.702125  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:07.729244  346554 cri.go:89] found id: ""
	I1002 07:20:07.729270  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.729279  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:07.729289  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:07.729305  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:07.774187  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:07.774226  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.840113  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:07.840153  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:07.873716  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:07.873757  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:07.891261  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:07.891289  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:07.916233  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:07.916263  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:07.952299  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:07.952332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:07.986719  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:07.986746  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:08.071303  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:08.071345  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:08.108002  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:08.108028  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:08.210536  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:08.210576  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:08.294093  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:08.284651    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286253    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286944    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.288549    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.289239    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:08.284651    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286253    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286944    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.288549    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.289239    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
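	Each polling cycle above re-gathers the same component logs. A short bash sketch of collecting them once by hand on the node (assuming the systemd units are named kubelet and crio, as the journalctl commands in the log suggest; <container-id> is a placeholder for one of the ids printed above):
	
	  # unit logs for the kubelet and the CRI-O runtime
	  sudo journalctl -u kubelet -n 400 --no-pager
	  sudo journalctl -u crio -n 400 --no-pager
	  # list containers, then tail a specific one
	  sudo crictl ps -a
	  sudo crictl logs --tail 400 <container-id>
	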
	I1002 07:20:10.795316  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:10.809081  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:10.809162  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:10.842834  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:10.842857  346554 cri.go:89] found id: ""
	I1002 07:20:10.842866  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:10.842923  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.846661  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:10.846743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:10.885119  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:10.885154  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:10.885160  346554 cri.go:89] found id: ""
	I1002 07:20:10.885167  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:10.885227  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.888993  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.892673  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:10.892745  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:10.919884  346554 cri.go:89] found id: ""
	I1002 07:20:10.919910  346554 logs.go:282] 0 containers: []
	W1002 07:20:10.919920  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:10.919926  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:10.919986  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:10.948791  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:10.948813  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:10.948818  346554 cri.go:89] found id: ""
	I1002 07:20:10.948832  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:10.948888  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.952760  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.956362  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:10.956465  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:10.984495  346554 cri.go:89] found id: ""
	I1002 07:20:10.984518  346554 logs.go:282] 0 containers: []
	W1002 07:20:10.984528  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:10.984535  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:10.984636  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:11.017757  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:11.017840  346554 cri.go:89] found id: ""
	I1002 07:20:11.017854  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:11.017923  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:11.022016  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:11.022121  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:11.049783  346554 cri.go:89] found id: ""
	I1002 07:20:11.049807  346554 logs.go:282] 0 containers: []
	W1002 07:20:11.049816  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:11.049826  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:11.049858  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:11.130029  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:11.121829    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.122481    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124100    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124782    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.126290    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:11.121829    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.122481    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124100    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124782    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.126290    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:11.130050  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:11.130065  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:11.158585  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:11.158617  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:11.206663  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:11.206698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:11.251780  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:11.251812  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:11.320488  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:11.320524  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:11.401025  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:11.401061  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:11.509831  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:11.509925  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:11.528908  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:11.528984  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:11.560309  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:11.560340  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:11.587476  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:11.587505  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
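
Note: every retry in this section follows the same pattern: enumerate control-plane containers with crictl ps -a --quiet --name=<component>, then tail the last 400 lines of each container that was found. A minimal standalone sketch of that loop, assuming it is run as root on the node; the component list and the 400-line tail mirror the commands in the log:

    for c in kube-apiserver etcd kube-scheduler kube-controller-manager; do
      for id in $(crictl ps -a --quiet --name="$c"); do
        echo "== $c $id =="
        crictl logs --tail 400 "$id"
      done
    done
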
	I1002 07:20:14.117921  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:14.129181  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:14.129256  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:14.155142  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:14.155165  346554 cri.go:89] found id: ""
	I1002 07:20:14.155174  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:14.155234  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.158996  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:14.159072  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:14.187368  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:14.187439  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:14.187451  346554 cri.go:89] found id: ""
	I1002 07:20:14.187459  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:14.187516  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.191550  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.195394  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:14.195489  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:14.221702  346554 cri.go:89] found id: ""
	I1002 07:20:14.221731  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.221741  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:14.221748  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:14.221805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:14.250745  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:14.250768  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:14.250774  346554 cri.go:89] found id: ""
	I1002 07:20:14.250781  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:14.250840  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.254464  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.257656  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:14.257732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:14.287657  346554 cri.go:89] found id: ""
	I1002 07:20:14.287684  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.287693  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:14.287699  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:14.287763  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:14.317647  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:14.317670  346554 cri.go:89] found id: ""
	I1002 07:20:14.317680  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:14.317738  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.321550  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:14.321664  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:14.347420  346554 cri.go:89] found id: ""
	I1002 07:20:14.347445  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.347455  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:14.347465  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:14.347476  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:14.428069  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:14.428106  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:14.482408  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:14.482447  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:14.534003  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:14.534036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:14.587616  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:14.587652  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:14.615153  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:14.615189  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:14.649482  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:14.649517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:14.745400  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:14.745440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:14.765273  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:14.765307  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:14.841087  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:14.832238    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.833271    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.834838    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.835677    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.837327    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:14.832238    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.833271    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.834838    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.835677    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.837327    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:14.841109  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:14.841123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:14.867206  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:14.867236  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
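
Note: besides per-container logs, each pass also collects host-level diagnostics: the kubelet and CRI-O journals and filtered kernel messages. A hedged sketch of the equivalent host-side commands, mirroring those in the log (the output redirection is illustrative only):

    sudo journalctl -u kubelet -n 400 --no-pager > kubelet.log
    sudo journalctl -u crio -n 400 --no-pager > crio.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
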
	I1002 07:20:17.396729  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:17.407809  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:17.407882  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:17.435626  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:17.435649  346554 cri.go:89] found id: ""
	I1002 07:20:17.435667  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:17.435729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.440093  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:17.440173  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:17.481710  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:17.481732  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:17.481738  346554 cri.go:89] found id: ""
	I1002 07:20:17.481745  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:17.481808  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.488857  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.492676  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:17.492748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:17.535179  346554 cri.go:89] found id: ""
	I1002 07:20:17.535251  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.535277  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:17.535317  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:17.535404  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:17.567305  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:17.567330  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:17.567335  346554 cri.go:89] found id: ""
	I1002 07:20:17.567343  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:17.567405  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.572504  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.576436  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:17.576540  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:17.604459  346554 cri.go:89] found id: ""
	I1002 07:20:17.604489  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.604498  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:17.604504  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:17.604568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:17.632230  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:17.632254  346554 cri.go:89] found id: ""
	I1002 07:20:17.632263  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:17.632352  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.636309  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:17.636416  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:17.664031  346554 cri.go:89] found id: ""
	I1002 07:20:17.664058  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.664068  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:17.664078  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:17.664090  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:17.690836  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:17.690911  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:17.720348  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:17.720376  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:17.752215  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:17.752295  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:17.855749  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:17.855789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:17.872293  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:17.872320  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:17.923506  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:17.923540  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:17.971187  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:17.971220  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:18.041592  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:18.041630  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:18.085650  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:18.085682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:18.171333  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:18.171372  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:18.244409  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:18.236277    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.236822    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238310    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238776    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.240614    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:18.236277    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.236822    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238310    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238776    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.240614    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:20.746282  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:20.757663  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:20.757743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:20.787729  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:20.787751  346554 cri.go:89] found id: ""
	I1002 07:20:20.787760  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:20.787845  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.792330  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:20.792424  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:20.829800  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:20.829824  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:20.829830  346554 cri.go:89] found id: ""
	I1002 07:20:20.829838  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:20.829899  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.833952  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.837642  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:20.837723  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:20.867702  346554 cri.go:89] found id: ""
	I1002 07:20:20.867725  346554 logs.go:282] 0 containers: []
	W1002 07:20:20.867734  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:20.867740  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:20.867830  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:20.908994  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:20.909016  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:20.909022  346554 cri.go:89] found id: ""
	I1002 07:20:20.909029  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:20.909085  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.913045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.916567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:20.916643  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:20.947545  346554 cri.go:89] found id: ""
	I1002 07:20:20.947571  346554 logs.go:282] 0 containers: []
	W1002 07:20:20.947581  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:20.947588  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:20.947651  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:20.980904  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:20.980984  346554 cri.go:89] found id: ""
	I1002 07:20:20.980999  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:20.981082  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.984909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:20.984982  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:21.020855  346554 cri.go:89] found id: ""
	I1002 07:20:21.020878  346554 logs.go:282] 0 containers: []
	W1002 07:20:21.020887  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:21.020896  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:21.020907  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:21.117602  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:21.117638  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:21.192022  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:21.182767    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.183788    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185393    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185998    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.187680    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:21.182767    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.183788    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185393    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185998    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.187680    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:21.192043  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:21.192057  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:21.276022  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:21.276060  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:21.308782  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:21.308822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:21.396093  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:21.396132  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:21.438867  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:21.438900  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:21.463876  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:21.463906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:21.500802  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:21.500843  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:21.550471  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:21.550508  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:21.590310  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:21.590349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
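
Note: two container IDs are consistently reported for etcd and kube-scheduler because crictl ps -a lists exited containers as well as running ones, so an older, exited instance can show up alongside the current one. A sketch of how the two could be told apart, assuming crictl's --state filter is available in this environment (an assumption, not taken from this run):

    # running instance only
    sudo crictl ps --quiet --name etcd
    # exited (older) instances only
    sudo crictl ps -a --quiet --state exited --name etcd
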
	I1002 07:20:24.119676  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:24.131693  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:24.131783  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:24.163845  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:24.163870  346554 cri.go:89] found id: ""
	I1002 07:20:24.163879  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:24.163939  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.167667  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:24.167742  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:24.195635  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:24.195658  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:24.195664  346554 cri.go:89] found id: ""
	I1002 07:20:24.195672  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:24.195731  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.199786  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.204099  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:24.204199  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:24.233690  346554 cri.go:89] found id: ""
	I1002 07:20:24.233716  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.233726  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:24.233733  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:24.233790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:24.262505  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:24.262565  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:24.262586  346554 cri.go:89] found id: ""
	I1002 07:20:24.262614  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:24.262691  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.266650  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.270417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:24.270511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:24.297687  346554 cri.go:89] found id: ""
	I1002 07:20:24.297713  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.297723  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:24.297729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:24.297790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:24.325175  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:24.325197  346554 cri.go:89] found id: ""
	I1002 07:20:24.325205  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:24.325284  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.329310  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:24.329399  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:24.358432  346554 cri.go:89] found id: ""
	I1002 07:20:24.358458  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.358468  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:24.358477  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:24.358489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:24.418997  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:24.419034  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:24.449127  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:24.449155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:24.545814  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:24.545853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:24.561748  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:24.561777  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:24.632202  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:24.623701    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.624508    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626130    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626462    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.628020    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:24.623701    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.624508    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626130    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626462    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.628020    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:24.632226  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:24.632239  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:24.662637  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:24.662668  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:24.740789  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:24.740830  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:24.773325  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:24.773357  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:24.807399  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:24.807428  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:24.853933  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:24.853972  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:27.396082  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:27.406955  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:27.407027  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:27.435147  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:27.435171  346554 cri.go:89] found id: ""
	I1002 07:20:27.435180  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:27.435238  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.440669  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:27.440745  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:27.467109  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:27.467176  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:27.467196  346554 cri.go:89] found id: ""
	I1002 07:20:27.467205  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:27.467275  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.471217  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.474815  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:27.474888  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:27.503111  346554 cri.go:89] found id: ""
	I1002 07:20:27.503136  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.503145  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:27.503152  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:27.503222  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:27.540213  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:27.540253  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:27.540260  346554 cri.go:89] found id: ""
	I1002 07:20:27.540276  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:27.540359  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.544590  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.548529  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:27.548605  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:27.577677  346554 cri.go:89] found id: ""
	I1002 07:20:27.577746  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.577772  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:27.577798  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:27.577892  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:27.607310  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:27.607329  346554 cri.go:89] found id: ""
	I1002 07:20:27.607337  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:27.607393  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.611619  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:27.611690  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:27.647844  346554 cri.go:89] found id: ""
	I1002 07:20:27.647872  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.647882  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:27.647892  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:27.647905  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:27.723377  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:27.713686    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.714844    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.715834    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717611    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717950    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:27.713686    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.714844    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.715834    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717611    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717950    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:27.723400  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:27.723419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:27.750902  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:27.750932  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:27.804228  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:27.804267  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:27.866989  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:27.867068  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:27.895361  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:27.895393  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:28.004869  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:28.004912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:28.030605  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:28.030637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:28.090494  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:28.090531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:28.120915  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:28.120953  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:28.213702  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:28.213740  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:30.746147  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:30.758010  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:30.758090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:30.789909  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:30.789936  346554 cri.go:89] found id: ""
	I1002 07:20:30.789945  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:30.790004  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.794321  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:30.794407  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:30.823421  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:30.823445  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:30.823451  346554 cri.go:89] found id: ""
	I1002 07:20:30.823459  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:30.823520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.827486  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.831334  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:30.831416  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:30.857968  346554 cri.go:89] found id: ""
	I1002 07:20:30.857996  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.858005  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:30.858012  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:30.858073  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:30.885972  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:30.885997  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:30.886002  346554 cri.go:89] found id: ""
	I1002 07:20:30.886010  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:30.886074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.891710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.897102  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:30.897174  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:30.928917  346554 cri.go:89] found id: ""
	I1002 07:20:30.928944  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.928953  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:30.928960  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:30.929079  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:30.957428  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:30.957456  346554 cri.go:89] found id: ""
	I1002 07:20:30.957465  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:30.957524  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.961555  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:30.961638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:30.991607  346554 cri.go:89] found id: ""
	I1002 07:20:30.991644  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.991654  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:30.991664  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:30.991682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:31.034696  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:31.034732  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:31.095475  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:31.095521  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:31.124509  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:31.124543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:31.164950  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:31.164982  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:31.242438  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:31.232305    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.233259    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.234890    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.236692    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.237374    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:31.232305    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.233259    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.234890    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.236692    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.237374    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:31.242461  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:31.242475  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:31.288791  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:31.288829  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:31.324555  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:31.324590  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:31.358683  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:31.358775  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:31.442957  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:31.443002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:31.546184  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:31.546226  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:34.062520  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:34.074346  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:34.074429  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:34.104094  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:34.104116  346554 cri.go:89] found id: ""
	I1002 07:20:34.104124  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:34.104184  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.108168  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:34.108242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:34.134780  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:34.134803  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:34.134808  346554 cri.go:89] found id: ""
	I1002 07:20:34.134816  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:34.134873  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.140158  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.144631  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:34.144709  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:34.171174  346554 cri.go:89] found id: ""
	I1002 07:20:34.171197  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.171209  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:34.171216  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:34.171279  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:34.201197  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:34.201265  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:34.201279  346554 cri.go:89] found id: ""
	I1002 07:20:34.201289  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:34.201358  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.205487  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.209274  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:34.209371  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:34.236797  346554 cri.go:89] found id: ""
	I1002 07:20:34.236823  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.236832  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:34.236839  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:34.236899  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:34.268130  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:34.268153  346554 cri.go:89] found id: ""
	I1002 07:20:34.268163  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:34.268221  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.272288  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:34.272494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:34.303012  346554 cri.go:89] found id: ""
	I1002 07:20:34.303036  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.303046  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:34.303057  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:34.303069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:34.330987  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:34.331016  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:34.409294  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:34.409332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:34.444890  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:34.444921  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:34.529848  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:34.521813    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.522492    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.523830    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.524582    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.526232    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:34.521813    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.522492    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.523830    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.524582    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.526232    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:34.529873  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:34.529887  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:34.576746  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:34.576783  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:34.617959  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:34.617994  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:34.680077  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:34.680116  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:34.709769  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:34.709801  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:34.741411  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:34.741440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:34.841059  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:34.841096  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:37.359292  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:37.370946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:37.371032  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:37.399137  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:37.399162  346554 cri.go:89] found id: ""
	I1002 07:20:37.399171  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:37.399230  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.403338  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:37.403412  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:37.430753  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:37.430777  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:37.430782  346554 cri.go:89] found id: ""
	I1002 07:20:37.430790  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:37.430846  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.434756  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.440208  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:37.440282  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:37.466624  346554 cri.go:89] found id: ""
	I1002 07:20:37.466708  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.466741  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:37.466763  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:37.466859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:37.494022  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:37.494043  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:37.494049  346554 cri.go:89] found id: ""
	I1002 07:20:37.494057  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:37.494137  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.498098  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.502412  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:37.502500  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:37.535920  346554 cri.go:89] found id: ""
	I1002 07:20:37.535947  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.535956  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:37.535963  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:37.536022  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:37.562970  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:37.562994  346554 cri.go:89] found id: ""
	I1002 07:20:37.563004  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:37.563062  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.567000  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:37.567077  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:37.595796  346554 cri.go:89] found id: ""
	I1002 07:20:37.595823  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.595832  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:37.595842  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:37.595875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:37.622318  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:37.622347  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:37.698567  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:37.698606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:37.730294  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:37.730323  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:37.746780  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:37.746819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:37.774051  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:37.774082  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:37.842657  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:37.842692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:37.879058  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:37.879101  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:37.958213  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:37.958255  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:38.066523  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:38.066564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:38.140589  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:38.132053    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.132715    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.134486    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.135135    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.136775    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:38.132053    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.132715    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.134486    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.135135    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.136775    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:38.140614  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:38.140628  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:40.668101  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:40.680533  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:40.680613  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:40.709182  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:40.709201  346554 cri.go:89] found id: ""
	I1002 07:20:40.709217  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:40.709275  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.714063  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:40.714131  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:40.741940  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:40.741960  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:40.741965  346554 cri.go:89] found id: ""
	I1002 07:20:40.741972  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:40.742030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.746103  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.749819  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:40.749890  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:40.779806  346554 cri.go:89] found id: ""
	I1002 07:20:40.779869  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.779893  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:40.779918  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:40.779999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:40.818846  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:40.818910  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:40.818930  346554 cri.go:89] found id: ""
	I1002 07:20:40.818956  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:40.819034  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.825049  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.829111  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:40.829255  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:40.857000  346554 cri.go:89] found id: ""
	I1002 07:20:40.857070  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.857101  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:40.857116  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:40.857204  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:40.890997  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:40.891021  346554 cri.go:89] found id: ""
	I1002 07:20:40.891030  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:40.891120  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.902062  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:40.902188  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:40.931155  346554 cri.go:89] found id: ""
	I1002 07:20:40.931192  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.931201  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:40.931258  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:40.931282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:40.968238  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:40.968267  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:41.004537  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:41.004577  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:41.077656  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:41.077693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:41.110709  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:41.110738  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:41.146808  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:41.146839  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:41.218315  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:41.209116    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.209601    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.211401    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213018    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213363    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:41.209116    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.209601    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.211401    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213018    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213363    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:41.218395  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:41.218476  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:41.270106  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:41.270141  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:41.300977  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:41.301007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:41.385349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:41.385387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:41.485614  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:41.485658  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:44.002362  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:44.017480  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:44.017558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:44.055626  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:44.055653  346554 cri.go:89] found id: ""
	I1002 07:20:44.055662  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:44.055736  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.059917  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:44.059997  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:44.097033  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:44.097067  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:44.097072  346554 cri.go:89] found id: ""
	I1002 07:20:44.097079  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:44.097147  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.101257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.105790  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:44.105890  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:44.134184  346554 cri.go:89] found id: ""
	I1002 07:20:44.134213  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.134222  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:44.134229  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:44.134316  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:44.172910  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:44.172972  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:44.172992  346554 cri.go:89] found id: ""
	I1002 07:20:44.173019  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:44.173087  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.177020  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.181101  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:44.181189  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:44.210050  346554 cri.go:89] found id: ""
	I1002 07:20:44.210072  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.210081  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:44.210088  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:44.210148  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:44.236942  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:44.236966  346554 cri.go:89] found id: ""
	I1002 07:20:44.236975  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:44.237032  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.240886  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:44.240968  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:44.267437  346554 cri.go:89] found id: ""
	I1002 07:20:44.267471  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.267482  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:44.267498  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:44.267522  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:44.311617  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:44.311650  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:44.371464  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:44.371502  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:44.401657  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:44.401685  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:44.429428  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:44.429458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:44.457332  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:44.457370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:44.542400  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:44.542441  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:44.576729  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:44.576808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:44.671950  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:44.671991  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:44.688074  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:44.688102  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:44.772308  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:44.762400    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.763526    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.764141    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766001    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766685    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:44.762400    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.763526    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.764141    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766001    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766685    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:44.772331  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:44.772344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.326275  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:47.337461  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:47.337588  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:47.370813  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:47.370885  346554 cri.go:89] found id: ""
	I1002 07:20:47.370909  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:47.370985  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.375983  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:47.376102  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:47.408952  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.409021  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:47.409046  346554 cri.go:89] found id: ""
	I1002 07:20:47.409075  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:47.409142  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.412894  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.416604  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:47.416678  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:47.443724  346554 cri.go:89] found id: ""
	I1002 07:20:47.443746  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.443755  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:47.443761  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:47.443825  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:47.472814  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:47.472835  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:47.472840  346554 cri.go:89] found id: ""
	I1002 07:20:47.472848  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:47.472910  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.476853  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.481052  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:47.481125  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:47.527292  346554 cri.go:89] found id: ""
	I1002 07:20:47.527316  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.527325  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:47.527331  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:47.527396  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:47.557465  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:47.557493  346554 cri.go:89] found id: ""
	I1002 07:20:47.557502  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:47.557573  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.561605  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:47.561776  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:47.592217  346554 cri.go:89] found id: ""
	I1002 07:20:47.592251  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.592261  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:47.592270  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:47.592282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:47.609667  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:47.609697  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:47.670961  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:47.670999  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:47.701512  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:47.701543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:47.730463  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:47.730493  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:47.813379  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:47.804825    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.805487    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.806775    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.807262    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.808792    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:47.804825    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.805487    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.806775    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.807262    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.808792    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:47.813403  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:47.813417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:47.839632  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:47.839663  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.890767  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:47.890807  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:47.931484  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:47.931519  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:48.013592  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:48.013683  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:48.048341  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:48.048371  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:50.660679  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:50.672098  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:50.672208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:50.698977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:50.699002  346554 cri.go:89] found id: ""
	I1002 07:20:50.699012  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:50.699155  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.703120  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:50.703197  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:50.731004  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:50.731030  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:50.731035  346554 cri.go:89] found id: ""
	I1002 07:20:50.731043  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:50.731134  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.735170  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.739036  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:50.739228  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:50.765233  346554 cri.go:89] found id: ""
	I1002 07:20:50.765257  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.765267  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:50.765276  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:50.765337  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:50.798825  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:50.798846  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:50.798851  346554 cri.go:89] found id: ""
	I1002 07:20:50.798858  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:50.798922  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.803023  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.806604  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:50.806684  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:50.834561  346554 cri.go:89] found id: ""
	I1002 07:20:50.834595  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.834605  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:50.834612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:50.834685  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:50.862616  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:50.862640  346554 cri.go:89] found id: ""
	I1002 07:20:50.862649  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:50.862719  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.866512  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:50.866591  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:50.894801  346554 cri.go:89] found id: ""
	I1002 07:20:50.894874  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.894898  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:50.894927  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:50.894970  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:50.922014  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:50.922093  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:50.963158  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:50.963238  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:51.041253  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:51.041298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:51.078068  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:51.078373  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:51.109345  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:51.109379  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:51.143553  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:51.143586  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:51.160251  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:51.160287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:51.232331  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:51.222843    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.223585    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226402    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226914    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.228078    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:51.222843    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.223585    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226402    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226914    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.228078    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:51.232357  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:51.232370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:51.284859  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:51.284891  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:51.366726  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:51.366764  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:53.965349  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:53.977241  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:53.977365  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:54.007342  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:54.007370  346554 cri.go:89] found id: ""
	I1002 07:20:54.007379  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:54.007452  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.014154  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:54.014243  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:54.042738  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:54.042761  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:54.042767  346554 cri.go:89] found id: ""
	I1002 07:20:54.042787  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:54.042849  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.047324  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.052426  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:54.052514  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:54.092137  346554 cri.go:89] found id: ""
	I1002 07:20:54.092162  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.092171  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:54.092177  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:54.092245  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:54.123873  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:54.123895  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:54.123900  346554 cri.go:89] found id: ""
	I1002 07:20:54.123908  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:54.123966  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.128307  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.132643  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:54.132764  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:54.167072  346554 cri.go:89] found id: ""
	I1002 07:20:54.167173  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.167197  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:54.167223  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:54.167317  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:54.201096  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:54.201124  346554 cri.go:89] found id: ""
	I1002 07:20:54.201133  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:54.201192  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.205200  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:54.205319  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:54.232346  346554 cri.go:89] found id: ""
	I1002 07:20:54.232375  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.232384  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:54.232394  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:54.232424  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:54.307053  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:54.297800    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.298604    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.300420    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.301180    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.302885    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:54.297800    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.298604    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.300420    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.301180    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.302885    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:54.307076  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:54.307120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:54.339765  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:54.339797  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:54.389419  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:54.389463  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:54.427898  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:54.427934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:54.459945  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:54.459979  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:54.495013  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:54.495049  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:54.593488  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:54.593523  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:54.699166  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:54.699248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:54.715185  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:54.715217  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:54.790047  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:54.790081  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:57.332703  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:57.343440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:57.343508  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:57.371159  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:57.371224  346554 cri.go:89] found id: ""
	I1002 07:20:57.371248  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:57.371325  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.376379  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:57.376455  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:57.403394  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:57.403417  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:57.403423  346554 cri.go:89] found id: ""
	I1002 07:20:57.403431  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:57.403486  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.407238  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.410942  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:57.411033  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:57.438995  346554 cri.go:89] found id: ""
	I1002 07:20:57.439020  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.439029  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:57.439036  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:57.439133  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:57.471614  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:57.471639  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:57.471644  346554 cri.go:89] found id: ""
	I1002 07:20:57.471656  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:57.471714  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.475670  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.479817  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:57.479927  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:57.514129  346554 cri.go:89] found id: ""
	I1002 07:20:57.514152  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.514160  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:57.514166  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:57.514229  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:57.540930  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:57.540954  346554 cri.go:89] found id: ""
	I1002 07:20:57.540963  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:57.541019  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.545166  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:57.545246  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:57.580607  346554 cri.go:89] found id: ""
	I1002 07:20:57.580633  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.580643  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:57.580653  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:57.580682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:57.662349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:57.662389  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:57.761863  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:57.761900  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:57.830325  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:57.830366  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:57.856569  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:57.856598  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:57.888135  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:57.888164  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:57.906242  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:57.906270  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:57.976993  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:57.967788    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.968516    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.970387    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.971058    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.973057    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:57.967788    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.968516    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.970387    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.971058    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.973057    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:57.977018  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:57.977033  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:58.011287  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:58.011323  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:58.063746  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:58.063782  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:58.114504  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:58.114539  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:00.655161  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:00.666760  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:00.666847  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:00.699194  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:00.699218  346554 cri.go:89] found id: ""
	I1002 07:21:00.699227  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:00.699283  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.703475  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:00.703551  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:00.730837  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:00.730862  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:00.730867  346554 cri.go:89] found id: ""
	I1002 07:21:00.730874  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:00.730933  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.734900  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.738704  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:00.738777  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:00.765809  346554 cri.go:89] found id: ""
	I1002 07:21:00.765832  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.765841  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:00.765847  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:00.765903  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:00.806888  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:00.806911  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:00.806916  346554 cri.go:89] found id: ""
	I1002 07:21:00.806924  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:00.806982  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.810980  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.815454  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:00.815527  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:00.843377  346554 cri.go:89] found id: ""
	I1002 07:21:00.843403  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.843413  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:00.843419  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:00.843480  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:00.870064  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:00.870084  346554 cri.go:89] found id: ""
	I1002 07:21:00.870094  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:00.870150  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.874067  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:00.874142  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:00.912375  346554 cri.go:89] found id: ""
	I1002 07:21:00.912400  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.912409  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:00.912419  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:00.912437  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:01.010660  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:01.010703  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:01.027564  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:01.027589  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:01.108980  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:01.099987    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101432    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101988    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103531    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103983    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:01.099987    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101432    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101988    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103531    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103983    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:01.109003  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:01.109017  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:01.140899  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:01.140925  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:01.201677  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:01.201719  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:01.249485  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:01.249516  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:01.310648  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:01.310682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:01.339591  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:01.339668  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:01.368293  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:01.368363  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:01.451526  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:01.451565  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:03.985004  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:03.995665  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:03.995732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:04.038756  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:04.038786  346554 cri.go:89] found id: ""
	I1002 07:21:04.038796  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:04.038863  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.042734  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:04.042813  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:04.080960  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:04.080984  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:04.080990  346554 cri.go:89] found id: ""
	I1002 07:21:04.080998  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:04.081055  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.085045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.088904  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:04.088984  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:04.116470  346554 cri.go:89] found id: ""
	I1002 07:21:04.116495  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.116504  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:04.116511  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:04.116568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:04.143301  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:04.143324  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:04.143330  346554 cri.go:89] found id: ""
	I1002 07:21:04.143336  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:04.143392  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.149220  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.156754  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:04.156875  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:04.186088  346554 cri.go:89] found id: ""
	I1002 07:21:04.186115  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.186125  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:04.186131  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:04.186222  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:04.213953  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:04.213978  346554 cri.go:89] found id: ""
	I1002 07:21:04.213987  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:04.214074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.220236  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:04.220339  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:04.249797  346554 cri.go:89] found id: ""
	I1002 07:21:04.249825  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.249834  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:04.249876  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:04.249893  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:04.334427  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:04.334464  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:04.365264  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:04.365294  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:04.467641  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:04.467693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:04.495501  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:04.495532  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:04.553841  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:04.553879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:04.590884  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:04.590912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:04.618124  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:04.618157  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:04.634781  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:04.634812  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:04.712412  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:04.704035    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.704877    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706460    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706999    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.708596    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:04.704035    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.704877    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706460    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706999    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.708596    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:04.712440  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:04.712458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:04.772367  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:04.772405  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:07.313327  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:07.324335  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:07.324410  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:07.352343  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:07.352367  346554 cri.go:89] found id: ""
	I1002 07:21:07.352376  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:07.352456  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.356634  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:07.356705  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:07.384754  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:07.384778  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:07.384783  346554 cri.go:89] found id: ""
	I1002 07:21:07.384791  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:07.384871  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.388840  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.392572  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:07.392672  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:07.418573  346554 cri.go:89] found id: ""
	I1002 07:21:07.418605  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.418615  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:07.418622  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:07.418681  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:07.450415  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:07.450439  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:07.450445  346554 cri.go:89] found id: ""
	I1002 07:21:07.450466  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:07.450529  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.454971  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.459463  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:07.459539  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:07.488692  346554 cri.go:89] found id: ""
	I1002 07:21:07.488722  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.488730  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:07.488737  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:07.488799  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:07.520325  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:07.520350  346554 cri.go:89] found id: ""
	I1002 07:21:07.520359  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:07.520421  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.524256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:07.524330  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:07.549519  346554 cri.go:89] found id: ""
	I1002 07:21:07.549540  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.549548  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:07.549558  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:07.549569  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:07.643274  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:07.643315  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:07.716156  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:07.708091    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.708893    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710592    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710902    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.712357    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:07.708091    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.708893    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710592    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710902    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.712357    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:07.716179  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:07.716195  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:07.743950  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:07.743980  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:07.830226  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:07.830266  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:07.847230  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:07.847260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:07.875839  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:07.875908  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:07.937408  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:07.937448  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:07.974391  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:07.974428  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:08.044504  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:08.044544  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:08.085844  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:08.085875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
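
The cycle above tails each control-plane container's logs with the same crictl and journalctl invocations minikube issues over SSH. A minimal manual sketch of the same checks, assuming shell access to the node and that crictl lives at the paths shown in the log (the container ID is a placeholder to fill in from the first command's output):

	# List container IDs for one component (all states), then tail that container's logs.
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo /usr/local/bin/crictl logs --tail 400 <container-id>
	# Unit logs for the kubelet and the CRI-O runtime, as gathered above.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
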
	I1002 07:21:10.619391  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:10.631035  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:10.631208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:10.664959  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:10.664983  346554 cri.go:89] found id: ""
	I1002 07:21:10.664992  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:10.665070  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.668812  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:10.668884  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:10.695400  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:10.695424  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:10.695430  346554 cri.go:89] found id: ""
	I1002 07:21:10.695438  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:10.695526  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.699317  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.703430  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:10.703524  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:10.728859  346554 cri.go:89] found id: ""
	I1002 07:21:10.728883  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.728892  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:10.728898  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:10.728974  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:10.754882  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:10.754905  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:10.754911  346554 cri.go:89] found id: ""
	I1002 07:21:10.754918  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:10.754984  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.758686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.762139  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:10.762248  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:10.787999  346554 cri.go:89] found id: ""
	I1002 07:21:10.788067  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.788092  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:10.788115  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:10.788204  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:10.814729  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:10.814803  346554 cri.go:89] found id: ""
	I1002 07:21:10.814825  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:10.814914  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.818388  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:10.818483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:10.845398  346554 cri.go:89] found id: ""
	I1002 07:21:10.845424  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.845433  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:10.845443  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:10.845482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:10.873199  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:10.873225  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:10.951572  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:10.951609  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:11.051035  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:11.051118  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:11.130878  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:11.121998    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.122765    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.124521    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.125102    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.126722    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:11.121998    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.122765    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.124521    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.125102    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.126722    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:11.130909  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:11.130924  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:11.156885  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:11.156920  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:11.211573  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:11.211615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:11.272703  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:11.272742  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:11.301304  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:11.301336  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:11.342833  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:11.342861  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:11.360176  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:11.360204  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
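
Every "describe nodes" attempt in this run fails the same way: connections to localhost:8443 are refused, so kubectl never reaches the API server. A hedged diagnostic sketch to confirm whether anything is listening on that port; ss and curl are assumed to be available on the node and are not part of the test itself:

	# Is any process listening on the apiserver port?
	sudo ss -ltnp | grep 8443
	# Probe the endpoint directly; "connection refused" here matches the kubectl errors above.
	curl -sk https://localhost:8443/healthz ; echo
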
	I1002 07:21:13.902061  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:13.915871  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:13.915935  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:13.954412  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:13.954439  346554 cri.go:89] found id: ""
	I1002 07:21:13.954448  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:13.954513  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:13.959571  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:13.959655  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:13.994709  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:13.994729  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:13.994735  346554 cri.go:89] found id: ""
	I1002 07:21:13.994743  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:13.994797  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:13.999427  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.003663  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:14.003749  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:14.042653  346554 cri.go:89] found id: ""
	I1002 07:21:14.042680  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.042690  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:14.042696  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:14.042757  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:14.087595  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:14.087615  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:14.087620  346554 cri.go:89] found id: ""
	I1002 07:21:14.087628  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:14.087688  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.092427  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.096855  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:14.096920  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:14.126816  346554 cri.go:89] found id: ""
	I1002 07:21:14.126843  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.126852  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:14.126858  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:14.126918  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:14.155318  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:14.155339  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:14.155344  346554 cri.go:89] found id: ""
	I1002 07:21:14.155351  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:14.155407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.159934  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.164569  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:14.164634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:14.209412  346554 cri.go:89] found id: ""
	I1002 07:21:14.209437  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.209449  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:14.209459  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:14.209471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:14.225995  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:14.226022  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:14.263998  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:14.264027  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:14.360121  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:14.360159  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:14.407199  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:14.407234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:14.434782  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:14.434814  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:14.521080  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:14.521121  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:14.593104  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:14.593134  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:14.699269  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:14.699308  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:14.786512  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:14.774915    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.778879    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.779597    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781358    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781959    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:14.774915    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.778879    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.779597    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781358    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781959    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:14.786535  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:14.786548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:14.869065  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:14.869109  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:14.900362  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:14.900454  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
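
From this point the run reports two kube-controller-manager containers (fbcd761d… alongside 843e8a44…), which usually indicates the earlier instance exited and a replacement was created. Dropping the --quiet flag used above makes crictl print its default table, so the state and age of both instances are visible at a glance; a small sketch using only flags already present in the log:

	# Without --quiet, crictl prints STATE and CREATED for every matching container.
	sudo crictl ps -a --name=kube-controller-manager
	# Tail whichever instance is newest (ID taken from the table output).
	sudo /usr/local/bin/crictl logs --tail 400 <newest-container-id>
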
	I1002 07:21:17.430222  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:17.442136  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:17.442212  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:17.468618  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:17.468642  346554 cri.go:89] found id: ""
	I1002 07:21:17.468664  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:17.468722  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.472407  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:17.472483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:17.500441  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:17.500462  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:17.500468  346554 cri.go:89] found id: ""
	I1002 07:21:17.500475  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:17.500534  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.504574  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.511111  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:17.511190  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:17.539180  346554 cri.go:89] found id: ""
	I1002 07:21:17.539208  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.539217  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:17.539224  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:17.539283  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:17.567616  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:17.567641  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:17.567647  346554 cri.go:89] found id: ""
	I1002 07:21:17.567654  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:17.567710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.571727  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.575519  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:17.575603  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:17.601045  346554 cri.go:89] found id: ""
	I1002 07:21:17.601070  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.601079  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:17.601086  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:17.601143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:17.628358  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:17.628379  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:17.628384  346554 cri.go:89] found id: ""
	I1002 07:21:17.628391  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:17.628479  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.632534  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.636208  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:17.636286  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:17.662364  346554 cri.go:89] found id: ""
	I1002 07:21:17.662389  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.662398  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:17.662408  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:17.662419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:17.756609  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:17.756643  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:17.772784  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:17.772821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:17.854603  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:17.846770    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.847523    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849095    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849421    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.850951    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:17.846770    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.847523    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849095    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849421    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.850951    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:17.854625  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:17.854639  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:17.890480  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:17.890513  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:17.955720  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:17.955755  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:17.986877  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:17.986906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:18.065618  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:18.065659  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:18.111257  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:18.111287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:18.141121  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:18.141151  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:18.202491  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:18.202530  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:18.232094  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:18.232124  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:20.762758  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:20.773630  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:20.773708  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:20.806503  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:20.806533  346554 cri.go:89] found id: ""
	I1002 07:21:20.806542  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:20.806599  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.810265  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:20.810338  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:20.839055  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:20.839105  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:20.839111  346554 cri.go:89] found id: ""
	I1002 07:21:20.839119  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:20.839176  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.843029  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.846663  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:20.846743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:20.875148  346554 cri.go:89] found id: ""
	I1002 07:21:20.875173  346554 logs.go:282] 0 containers: []
	W1002 07:21:20.875183  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:20.875190  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:20.875249  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:20.907677  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:20.907701  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:20.907707  346554 cri.go:89] found id: ""
	I1002 07:21:20.907715  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:20.907772  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.911686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.915632  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:20.915707  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:20.941873  346554 cri.go:89] found id: ""
	I1002 07:21:20.941899  346554 logs.go:282] 0 containers: []
	W1002 07:21:20.941908  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:20.941915  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:20.941975  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:20.973490  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:20.973515  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:20.973521  346554 cri.go:89] found id: ""
	I1002 07:21:20.973530  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:20.973585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.977414  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.981138  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:20.981213  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:21.013505  346554 cri.go:89] found id: ""
	I1002 07:21:21.013533  346554 logs.go:282] 0 containers: []
	W1002 07:21:21.013543  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:21.013553  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:21.013565  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:21.047930  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:21.047959  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:21.144461  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:21.144498  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:21.218444  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:21.209931    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.210755    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212333    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212924    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.214549    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:21.209931    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.210755    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212333    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212924    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.214549    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:21.218469  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:21.218482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:21.244979  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:21.245010  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:21.273907  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:21.273940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:21.304310  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:21.304341  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:21.383311  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:21.383390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:21.418944  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:21.418976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:21.437126  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:21.437154  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:21.499338  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:21.499373  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:21.541388  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:21.541424  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:24.103318  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:24.114524  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:24.114645  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:24.142263  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:24.142286  346554 cri.go:89] found id: ""
	I1002 07:21:24.142295  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:24.142357  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.146924  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:24.146998  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:24.174920  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:24.174945  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:24.174950  346554 cri.go:89] found id: ""
	I1002 07:21:24.174958  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:24.175015  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.179961  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.183781  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:24.183859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:24.213946  346554 cri.go:89] found id: ""
	I1002 07:21:24.213969  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.213978  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:24.213985  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:24.214044  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:24.240875  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:24.240898  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:24.240903  346554 cri.go:89] found id: ""
	I1002 07:21:24.240910  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:24.240967  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.244817  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.248504  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:24.248601  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:24.277554  346554 cri.go:89] found id: ""
	I1002 07:21:24.277579  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.277588  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:24.277595  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:24.277675  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:24.308411  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:24.308507  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:24.308518  346554 cri.go:89] found id: ""
	I1002 07:21:24.308526  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:24.308585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.312514  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.316209  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:24.316322  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:24.352013  346554 cri.go:89] found id: ""
	I1002 07:21:24.352037  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.352047  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:24.352057  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:24.352070  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:24.392888  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:24.392926  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:24.422136  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:24.422162  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:24.522148  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:24.522189  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:24.559761  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:24.559789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:24.635577  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:24.626450    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.627161    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.628806    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.629342    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.630887    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:24.626450    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.627161    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.628806    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.629342    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.630887    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:24.635658  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:24.635688  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:24.664008  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:24.664038  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:24.716205  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:24.716243  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:24.776422  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:24.776465  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:24.812576  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:24.812606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:24.850011  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:24.850051  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:24.957619  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:24.957658  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:27.474346  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:27.486924  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:27.486999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:27.527387  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:27.527411  346554 cri.go:89] found id: ""
	I1002 07:21:27.527419  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:27.527481  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.531347  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:27.531425  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:27.557184  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:27.557209  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:27.557216  346554 cri.go:89] found id: ""
	I1002 07:21:27.557226  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:27.557285  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.561185  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.564887  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:27.564964  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:27.593958  346554 cri.go:89] found id: ""
	I1002 07:21:27.593984  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.593993  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:27.594000  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:27.594070  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:27.624297  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:27.624321  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:27.624325  346554 cri.go:89] found id: ""
	I1002 07:21:27.624332  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:27.624390  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.628548  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.632313  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:27.632401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:27.658827  346554 cri.go:89] found id: ""
	I1002 07:21:27.658850  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.658858  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:27.658876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:27.658942  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:27.687346  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:27.687422  346554 cri.go:89] found id: ""
	I1002 07:21:27.687438  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:27.687516  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.691438  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:27.691563  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:27.716933  346554 cri.go:89] found id: ""
	I1002 07:21:27.716959  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.716969  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:27.716979  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:27.717019  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:27.817783  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:27.817831  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:27.857490  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:27.857525  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:27.885125  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:27.885157  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:27.918095  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:27.918133  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:27.933988  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:27.934018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:28.004686  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:27.994706    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.995565    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997325    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997806    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.999393    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:27.994706    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.995565    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997325    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997806    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.999393    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:28.004719  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:28.004734  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:28.034260  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:28.034287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:28.093230  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:28.093269  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:28.164138  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:28.164177  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:28.195157  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:28.195188  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:30.778568  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:30.789765  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:30.789833  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:30.825174  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:30.825194  346554 cri.go:89] found id: ""
	I1002 07:21:30.825202  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:30.825257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.829729  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:30.829796  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:30.856611  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:30.856632  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:30.856637  346554 cri.go:89] found id: ""
	I1002 07:21:30.856644  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:30.856701  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.860561  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.864279  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:30.864353  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:30.891192  346554 cri.go:89] found id: ""
	I1002 07:21:30.891217  346554 logs.go:282] 0 containers: []
	W1002 07:21:30.891257  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:30.891269  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:30.891353  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:30.918873  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:30.918892  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:30.918897  346554 cri.go:89] found id: ""
	I1002 07:21:30.918904  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:30.918965  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.922949  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.926830  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:30.926928  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:30.953030  346554 cri.go:89] found id: ""
	I1002 07:21:30.953059  346554 logs.go:282] 0 containers: []
	W1002 07:21:30.953068  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:30.953074  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:30.953131  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:30.980458  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:30.980480  346554 cri.go:89] found id: ""
	I1002 07:21:30.980489  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:30.980547  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.984323  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:30.984450  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:31.026334  346554 cri.go:89] found id: ""
	I1002 07:21:31.026360  346554 logs.go:282] 0 containers: []
	W1002 07:21:31.026370  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:31.026380  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:31.026416  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:31.058391  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:31.058420  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:31.116004  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:31.116040  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:31.151060  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:31.151099  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:31.231368  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:31.231406  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:31.332798  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:31.332835  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:31.413678  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:31.405625    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.406285    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.407900    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.408576    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.410010    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:31.405625    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.406285    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.407900    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.408576    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.410010    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:31.413705  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:31.413717  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:31.461265  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:31.461299  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:31.534946  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:31.534986  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:31.562600  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:31.562629  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:31.592876  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:31.592906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:34.110078  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:34.121201  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:34.121271  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:34.148533  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:34.148554  346554 cri.go:89] found id: ""
	I1002 07:21:34.148562  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:34.148621  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.152503  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:34.152585  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:34.181027  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:34.181050  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:34.181056  346554 cri.go:89] found id: ""
	I1002 07:21:34.181063  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:34.181117  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.185002  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.189485  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:34.189560  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:34.215599  346554 cri.go:89] found id: ""
	I1002 07:21:34.215625  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.215634  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:34.215641  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:34.215699  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:34.241734  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:34.241763  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:34.241768  346554 cri.go:89] found id: ""
	I1002 07:21:34.241776  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:34.241832  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.245545  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.248974  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:34.249050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:34.276023  346554 cri.go:89] found id: ""
	I1002 07:21:34.276049  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.276059  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:34.276072  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:34.276132  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:34.303384  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:34.303407  346554 cri.go:89] found id: ""
	I1002 07:21:34.303415  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:34.303472  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.307469  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:34.307539  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:34.340234  346554 cri.go:89] found id: ""
	I1002 07:21:34.340261  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.340271  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:34.340281  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:34.340293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:34.356522  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:34.356550  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:34.394796  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:34.394825  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:34.443502  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:34.443538  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:34.474055  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:34.474081  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:34.555556  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:34.555637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:34.658066  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:34.658101  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:34.733631  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:34.724940    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.725631    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.727437    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.728124    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.729973    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:34.724940    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.725631    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.727437    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.728124    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.729973    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:34.733651  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:34.733665  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:34.784032  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:34.784068  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:34.847736  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:34.847771  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:34.875075  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:34.875172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:37.408950  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:37.421164  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:37.421273  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:37.452410  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:37.452439  346554 cri.go:89] found id: ""
	I1002 07:21:37.452449  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:37.452505  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.456325  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:37.456445  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:37.486317  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:37.486340  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:37.486346  346554 cri.go:89] found id: ""
	I1002 07:21:37.486353  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:37.486451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.490342  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.494027  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:37.494104  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:37.527183  346554 cri.go:89] found id: ""
	I1002 07:21:37.527257  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.527281  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:37.527305  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:37.527403  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:37.553164  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:37.553189  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:37.553194  346554 cri.go:89] found id: ""
	I1002 07:21:37.553202  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:37.553263  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.557191  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.560812  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:37.560909  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:37.592768  346554 cri.go:89] found id: ""
	I1002 07:21:37.592837  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.592861  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:37.592887  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:37.592973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:37.619244  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:37.619275  346554 cri.go:89] found id: ""
	I1002 07:21:37.619285  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:37.619382  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.622994  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:37.623067  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:37.654796  346554 cri.go:89] found id: ""
	I1002 07:21:37.654833  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.654843  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:37.654853  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:37.654864  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:37.735865  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:37.735903  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:37.829667  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:37.829705  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:37.906371  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:37.897524    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.898687    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.899551    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901063    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901395    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:37.897524    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.898687    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.899551    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901063    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901395    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:37.906396  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:37.906409  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:37.931859  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:37.931891  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:37.982107  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:37.982141  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:38.026363  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:38.026402  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:38.097347  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:38.097387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:38.129911  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:38.129940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:38.174203  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:38.174233  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:38.192324  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:38.192356  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:40.723244  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:40.733967  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:40.734044  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:40.761160  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:40.761180  346554 cri.go:89] found id: ""
	I1002 07:21:40.761196  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:40.761257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.764997  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:40.765082  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:40.793331  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:40.793357  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:40.793376  346554 cri.go:89] found id: ""
	I1002 07:21:40.793385  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:40.793441  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.799890  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.803764  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:40.803836  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:40.834660  346554 cri.go:89] found id: ""
	I1002 07:21:40.834686  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.834696  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:40.834702  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:40.834765  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:40.866063  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:40.866087  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:40.866093  346554 cri.go:89] found id: ""
	I1002 07:21:40.866103  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:40.866168  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.870407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.873946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:40.874058  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:40.908301  346554 cri.go:89] found id: ""
	I1002 07:21:40.908367  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.908391  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:40.908417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:40.908494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:40.937896  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:40.937966  346554 cri.go:89] found id: ""
	I1002 07:21:40.937990  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:40.938080  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.941880  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:40.941952  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:40.967147  346554 cri.go:89] found id: ""
	I1002 07:21:40.967174  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.967190  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:40.967226  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:40.967238  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:41.061039  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:41.061077  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:41.080254  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:41.080282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:41.108521  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:41.108547  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:41.162117  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:41.162154  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:41.233238  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:41.233276  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:41.260363  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:41.260392  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:41.333767  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:41.325094    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.325822    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.326721    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328411    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328796    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:41.325094    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.325822    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.326721    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328411    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328796    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:41.333840  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:41.333863  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:41.370518  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:41.370556  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:41.399620  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:41.399646  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:41.485257  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:41.485299  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:44.031564  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:44.043423  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:44.043501  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:44.077366  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:44.077391  346554 cri.go:89] found id: ""
	I1002 07:21:44.077400  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:44.077473  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.082216  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:44.082297  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:44.114495  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:44.114564  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:44.114585  346554 cri.go:89] found id: ""
	I1002 07:21:44.114612  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:44.114701  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.118699  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.122876  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:44.122955  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:44.161976  346554 cri.go:89] found id: ""
	I1002 07:21:44.162003  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.162015  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:44.162021  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:44.162120  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:44.190658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:44.190682  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:44.190688  346554 cri.go:89] found id: ""
	I1002 07:21:44.190695  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:44.190800  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.194562  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.198424  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:44.198514  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:44.224096  346554 cri.go:89] found id: ""
	I1002 07:21:44.224158  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.224181  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:44.224207  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:44.224284  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:44.251545  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:44.251569  346554 cri.go:89] found id: ""
	I1002 07:21:44.251581  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:44.251639  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.255354  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:44.255428  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:44.282373  346554 cri.go:89] found id: ""
	I1002 07:21:44.282400  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.282409  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:44.282419  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:44.282431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:44.308028  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:44.308062  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:44.363685  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:44.363723  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:44.396318  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:44.396349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:44.442337  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:44.442370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:44.546740  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:44.546778  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:44.562701  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:44.562734  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:44.638865  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:44.629817    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.630563    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632343    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632894    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.634422    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:44.629817    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.630563    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632343    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632894    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.634422    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:44.638901  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:44.638934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:44.675050  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:44.675117  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:44.759066  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:44.759108  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:44.789536  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:44.789569  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
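
The cycle above repeatedly shells out to crictl to enumerate containers per control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, ...) and records whatever IDs come back. A minimal Go sketch of that listing step, assuming crictl is installed on the node and the command may be run via sudo; the helper name listContainerIDs is illustrative and not a minikube function:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs runs `crictl ps -a --quiet --name=<name>` and returns the
    // container IDs it prints, one per line (illustrative helper, not minikube code).
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, c := range components {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Printf("listing %q failed: %v\n", c, err)
    			continue
    		}
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }

An empty result for coredns, kube-proxy, and kindnet, as seen above, simply means no such containers exist yet on the node, not that the listing command failed.
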
	I1002 07:21:47.372747  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:47.384470  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:47.384538  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:47.411456  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:47.411476  346554 cri.go:89] found id: ""
	I1002 07:21:47.411484  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:47.411538  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.415979  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:47.416052  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:47.441980  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:47.442000  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:47.442005  346554 cri.go:89] found id: ""
	I1002 07:21:47.442012  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:47.442071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.446178  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.449820  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:47.449889  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:47.480516  346554 cri.go:89] found id: ""
	I1002 07:21:47.480597  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.480614  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:47.480622  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:47.480700  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:47.512233  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:47.512299  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:47.512321  346554 cri.go:89] found id: ""
	I1002 07:21:47.512347  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:47.512447  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.517986  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.522484  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:47.522599  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:47.554391  346554 cri.go:89] found id: ""
	I1002 07:21:47.554459  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.554483  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:47.554509  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:47.554608  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:47.581519  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:47.581586  346554 cri.go:89] found id: ""
	I1002 07:21:47.581608  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:47.581710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.585885  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:47.585999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:47.615242  346554 cri.go:89] found id: ""
	I1002 07:21:47.615272  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.615281  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:47.615291  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:47.615322  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:47.635364  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:47.635394  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:47.712651  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:47.703908    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.704731    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.705628    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.706326    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.707409    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:47.703908    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.704731    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.705628    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.706326    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.707409    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:47.712678  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:47.712694  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:47.743506  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:47.743536  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:47.811148  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:47.811227  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:47.870291  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:47.870324  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:47.910224  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:47.910257  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:47.939069  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:47.939155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:47.964969  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:47.965008  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:48.043117  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:48.043158  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:48.088315  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:48.088344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
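
Each "describe nodes" attempt fails for the same reason: nothing is accepting connections on localhost:8443 yet, so kubectl cannot even fetch the API group list. A short sketch of the underlying reachability check, assuming the apiserver is expected on localhost:8443; the probe below is purely illustrative and not part of minikube:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probeAPIServer dials the apiserver's TCP endpoint; a "connection refused"
    // here corresponds to the kubectl errors captured in the log above.
    func probeAPIServer(addr string) error {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		return err // e.g. "dial tcp [::1]:8443: connect: connection refused"
    	}
    	return conn.Close()
    }

    func main() {
    	if err := probeAPIServer("localhost:8443"); err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	fmt.Println("apiserver port is accepting connections")
    }
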
	I1002 07:21:50.689757  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:50.700824  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:50.700893  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:50.728143  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:50.728166  346554 cri.go:89] found id: ""
	I1002 07:21:50.728175  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:50.728244  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.732333  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:50.732406  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:50.757855  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:50.757880  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:50.757886  346554 cri.go:89] found id: ""
	I1002 07:21:50.757905  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:50.757972  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.762029  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.765976  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:50.766050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:50.799256  346554 cri.go:89] found id: ""
	I1002 07:21:50.799278  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.799287  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:50.799293  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:50.799360  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:50.831950  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:50.831974  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:50.831981  346554 cri.go:89] found id: ""
	I1002 07:21:50.831988  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:50.832045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.836319  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.840585  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:50.840668  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:50.870390  346554 cri.go:89] found id: ""
	I1002 07:21:50.870416  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.870428  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:50.870436  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:50.870502  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:50.900076  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:50.900103  346554 cri.go:89] found id: ""
	I1002 07:21:50.900112  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:50.900193  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.904363  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:50.904461  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:50.932728  346554 cri.go:89] found id: ""
	I1002 07:21:50.932755  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.932775  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:50.932786  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:50.932798  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:51.001280  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:50.992878    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.993924    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.994793    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.995597    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.997141    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:50.992878    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.993924    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.994793    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.995597    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.997141    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:51.001310  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:51.001326  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:51.032692  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:51.032721  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:51.086523  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:51.086563  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:51.151924  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:51.151959  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:51.181936  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:51.181965  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:51.209313  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:51.209340  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:51.246072  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:51.246103  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:51.328956  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:51.328991  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:51.362658  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:51.362692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:51.461576  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:51.461615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:53.981504  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:53.992767  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:53.992841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:54.027324  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:54.027347  346554 cri.go:89] found id: ""
	I1002 07:21:54.027356  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:54.027422  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.031946  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:54.032021  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:54.059889  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:54.059911  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:54.059916  346554 cri.go:89] found id: ""
	I1002 07:21:54.059924  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:54.059983  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.064071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.068437  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:54.068516  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:54.100879  346554 cri.go:89] found id: ""
	I1002 07:21:54.100906  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.100917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:54.100923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:54.101019  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:54.127769  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:54.127792  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:54.127798  346554 cri.go:89] found id: ""
	I1002 07:21:54.127806  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:54.127871  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.131837  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.135428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:54.135507  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:54.163909  346554 cri.go:89] found id: ""
	I1002 07:21:54.163934  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.163943  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:54.163950  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:54.164008  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:54.195746  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:54.195778  346554 cri.go:89] found id: ""
	I1002 07:21:54.195787  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:54.195846  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.200638  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:54.200733  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:54.228414  346554 cri.go:89] found id: ""
	I1002 07:21:54.228492  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.228518  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:54.228534  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:54.228548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:54.261854  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:54.261884  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:54.337793  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:54.329984    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.330545    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332031    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332516    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.334074    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:54.329984    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.330545    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332031    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332516    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.334074    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:54.337814  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:54.337828  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:54.374142  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:54.374176  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:54.444394  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:54.444430  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:54.487047  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:54.487074  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:54.531639  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:54.531667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:54.639157  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:54.639196  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:54.655755  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:54.655784  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:54.685950  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:54.685978  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:54.753837  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:54.753879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
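
The host-level sources (kubelet, CRI-O, dmesg) are gathered by running shell pipelines through `bash -c`, exactly as the Run: lines show. A minimal local sketch of those commands, assuming a systemd host with journalctl available; runShell is a made-up stand-in for the ssh_runner wrapper seen in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runShell executes a command line via `bash -c`, mirroring how the
    // gathering step above invokes journalctl and dmesg on the node.
    func runShell(cmd string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	cmds := []string{
    		`sudo journalctl -u kubelet -n 400`,
    		`sudo journalctl -u crio -n 400`,
    		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
    	}
    	for _, cmd := range cmds {
    		out, err := runShell(cmd)
    		if err != nil {
    			fmt.Printf("command %q failed: %v\n", cmd, err)
    			continue
    		}
    		fmt.Print(out)
    	}
    }
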
	I1002 07:21:57.341138  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:57.351729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:57.351806  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:57.383937  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:57.383962  346554 cri.go:89] found id: ""
	I1002 07:21:57.383970  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:57.384030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.387697  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:57.387774  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:57.413348  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:57.413372  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:57.413377  346554 cri.go:89] found id: ""
	I1002 07:21:57.413385  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:57.413451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.417397  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.420826  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:57.420904  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:57.453888  346554 cri.go:89] found id: ""
	I1002 07:21:57.453913  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.453922  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:57.453928  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:57.453986  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:57.483451  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:57.483472  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:57.483476  346554 cri.go:89] found id: ""
	I1002 07:21:57.483483  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:57.483541  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.487407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.490932  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:57.491034  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:57.526291  346554 cri.go:89] found id: ""
	I1002 07:21:57.526318  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.526327  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:57.526334  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:57.526391  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:57.554217  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:57.554297  346554 cri.go:89] found id: ""
	I1002 07:21:57.554320  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:57.554415  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.558417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:57.558494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:57.590610  346554 cri.go:89] found id: ""
	I1002 07:21:57.590632  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.590640  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:57.590649  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:57.590662  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:57.686336  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:57.686376  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:57.717511  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:57.717543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:57.754283  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:57.754326  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:57.785227  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:57.785258  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:57.869305  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:57.869342  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:57.909139  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:57.909171  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:57.926456  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:57.926487  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:57.995639  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:57.987505    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.988090    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.989876    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.990282    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.991551    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:57.987505    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.988090    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.989876    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.990282    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.991551    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:57.995664  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:57.995679  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:58.058207  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:58.058248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:58.125241  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:58.125284  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:00.654876  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:00.665832  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:00.665905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:00.693874  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:00.693939  346554 cri.go:89] found id: ""
	I1002 07:22:00.693962  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:00.694054  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.697859  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:00.697934  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:00.725245  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:00.725270  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:00.725276  346554 cri.go:89] found id: ""
	I1002 07:22:00.725284  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:00.725364  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.729223  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.732817  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:00.732935  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:00.758839  346554 cri.go:89] found id: ""
	I1002 07:22:00.758906  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.758929  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:00.758953  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:00.759039  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:00.799071  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:00.799149  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:00.799155  346554 cri.go:89] found id: ""
	I1002 07:22:00.799162  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:00.799234  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.803167  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.806750  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:00.806845  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:00.839560  346554 cri.go:89] found id: ""
	I1002 07:22:00.839587  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.839596  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:00.839602  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:00.839660  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:00.870224  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:00.870255  346554 cri.go:89] found id: ""
	I1002 07:22:00.870263  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:00.870336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.874393  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:00.874495  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:00.912075  346554 cri.go:89] found id: ""
	I1002 07:22:00.912105  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.912114  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:00.912124  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:00.912136  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:00.937824  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:00.937853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:00.995416  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:00.995451  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:01.066170  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:01.066205  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:01.097565  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:01.097596  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:01.177599  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:01.177641  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:01.279014  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:01.279051  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:01.294984  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:01.295013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:01.367956  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:01.359956    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.360472    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362061    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362543    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.364048    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:01.359956    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.360472    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362061    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362543    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.364048    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:01.368020  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:01.368050  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:01.410820  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:01.410865  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:01.438796  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:01.438821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
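
Between gathering passes, the timestamps show a poll for a running apiserver process with `pgrep -xnf kube-apiserver.*minikube.*` roughly every three seconds. A rough sketch of such a wait loop; the poll interval and timeout below are assumptions for illustration, not minikube's actual values:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning reports whether pgrep finds a kube-apiserver process
    // started for the minikube profile (pattern copied from the log above).
    // pgrep exits non-zero when no process matches, which Run() surfaces as an error.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(4 * time.Minute) // assumed timeout for the sketch
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }
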
	I1002 07:22:03.971937  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:03.983881  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:03.983958  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:04.015026  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:04.015047  346554 cri.go:89] found id: ""
	I1002 07:22:04.015055  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:04.015146  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.019432  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:04.019511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:04.047606  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:04.047638  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:04.047644  346554 cri.go:89] found id: ""
	I1002 07:22:04.047651  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:04.047716  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.052312  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.055940  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:04.056013  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:04.084749  346554 cri.go:89] found id: ""
	I1002 07:22:04.084774  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.084784  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:04.084791  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:04.084858  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:04.115693  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:04.115718  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:04.115724  346554 cri.go:89] found id: ""
	I1002 07:22:04.115732  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:04.115791  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.119451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.123387  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:04.123509  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:04.160601  346554 cri.go:89] found id: ""
	I1002 07:22:04.160634  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.160643  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:04.160650  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:04.160709  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:04.186914  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:04.186975  346554 cri.go:89] found id: ""
	I1002 07:22:04.187000  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:04.187074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.190897  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:04.190972  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:04.217225  346554 cri.go:89] found id: ""
	I1002 07:22:04.217292  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.217306  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:04.217320  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:04.217332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:04.248848  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:04.248876  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:04.265771  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:04.265801  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:04.331344  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:04.323383    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.324116    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.325749    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.326044    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.327474    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:04.323383    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.324116    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.325749    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.326044    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.327474    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:04.331380  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:04.331395  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:04.358729  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:04.358757  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:04.416966  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:04.417007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:04.455261  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:04.455298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:04.483009  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:04.483037  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:04.563547  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:04.563585  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:04.668263  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:04.668301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:04.744129  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:04.744172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:07.275239  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:07.285854  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:07.285925  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:07.312977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:07.312997  346554 cri.go:89] found id: ""
	I1002 07:22:07.313005  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:07.313060  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.316845  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:07.316920  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:07.346852  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:07.346874  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:07.346879  346554 cri.go:89] found id: ""
	I1002 07:22:07.346887  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:07.346943  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.350635  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.354162  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:07.354227  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:07.383691  346554 cri.go:89] found id: ""
	I1002 07:22:07.383716  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.383725  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:07.383732  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:07.383790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:07.412740  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:07.412762  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:07.412768  346554 cri.go:89] found id: ""
	I1002 07:22:07.412775  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:07.412874  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.416633  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.420294  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:07.420370  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:07.448452  346554 cri.go:89] found id: ""
	I1002 07:22:07.448481  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.448496  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:07.448503  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:07.448573  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:07.478691  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:07.478759  346554 cri.go:89] found id: ""
	I1002 07:22:07.478782  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:07.478877  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.484491  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:07.484566  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:07.526882  346554 cri.go:89] found id: ""
	I1002 07:22:07.526907  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.526916  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:07.526926  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:07.526940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:07.543682  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:07.543709  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:07.622365  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:07.613920    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.614676    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616380    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616942    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.618513    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:07.613920    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.614676    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616380    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616942    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.618513    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:07.622386  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:07.622401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:07.688381  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:07.688417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:07.716317  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:07.716368  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:07.765160  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:07.765187  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:07.863442  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:07.863480  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:07.890947  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:07.890975  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:07.931413  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:07.931445  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:07.994034  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:07.994116  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:08.029432  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:08.029459  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
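
Each gathering pass above follows the same two-step pattern per component: list candidate container ids with sudo crictl ps -a --quiet --name=<component>, then tail the last 400 lines of each id with sudo crictl logs --tail 400 <id>, alongside journalctl for kubelet and CRI-O and a filtered dmesg; the pass then repeats roughly every three seconds (07:22:04, :07, :10, ...) while the API server stays unreachable. A small, illustrative Go sketch of that per-component pattern (an assumed standalone helper, not minikube's own implementation):

// gather_component_logs.go - illustrative sketch only.
// Reproduces the two-step pattern in the cycle above: list container
// ids with `sudo crictl ps -a --quiet --name=<component>`, then tail
// each one with `sudo crictl logs --tail 400 <id>`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func gather(component string) error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+component).Output()
	if err != nil {
		return fmt.Errorf("listing %s containers: %w", component, err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("no container found matching %q\n", component)
		return nil
	}
	for _, id := range ids {
		fmt.Printf("==> %s [%s]\n", component, id)
		tail := exec.Command("sudo", "crictl", "logs", "--tail", "400", id)
		tail.Stdout = os.Stdout
		tail.Stderr = os.Stderr
		if err := tail.Run(); err != nil {
			return fmt.Errorf("tailing %s: %w", id, err)
		}
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		if err := gather(c); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}

The same pattern also explains the 'No container was found matching "coredns"', '"kube-proxy"', and '"kindnet"' warnings in the trace: those lookups return no container ids, so there is nothing to tail for them.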
	I1002 07:22:10.612654  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:10.624226  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:10.624295  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:10.651797  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:10.651820  346554 cri.go:89] found id: ""
	I1002 07:22:10.651829  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:10.651887  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.655778  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:10.655861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:10.682781  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:10.682804  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:10.682810  346554 cri.go:89] found id: ""
	I1002 07:22:10.682817  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:10.682873  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.686610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.690176  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:10.690248  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:10.716340  346554 cri.go:89] found id: ""
	I1002 07:22:10.716365  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.716374  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:10.716380  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:10.716450  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:10.744916  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:10.744941  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:10.744947  346554 cri.go:89] found id: ""
	I1002 07:22:10.744954  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:10.745009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.748825  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.752367  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:10.752459  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:10.778426  346554 cri.go:89] found id: ""
	I1002 07:22:10.778491  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.778519  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:10.778545  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:10.778634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:10.816930  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:10.816956  346554 cri.go:89] found id: ""
	I1002 07:22:10.816965  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:10.817021  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.820675  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:10.820748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:10.848624  346554 cri.go:89] found id: ""
	I1002 07:22:10.848692  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.848716  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:10.848747  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:10.848784  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:10.949146  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:10.949183  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:10.966424  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:10.966503  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:11.050571  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:11.041861    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.042811    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044425    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044785    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.047001    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:11.041861    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.042811    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044425    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044785    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.047001    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:11.050590  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:11.050607  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:11.096274  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:11.096305  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:11.163795  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:11.163833  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:11.198136  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:11.198167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:11.281776  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:11.281815  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:11.314298  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:11.314329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:11.346046  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:11.346074  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:11.401509  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:11.401546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:13.937437  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:13.948853  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:13.948931  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:13.978524  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:13.978546  346554 cri.go:89] found id: ""
	I1002 07:22:13.978562  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:13.978622  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:13.983904  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:13.984002  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:14.018404  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:14.018427  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:14.018432  346554 cri.go:89] found id: ""
	I1002 07:22:14.018441  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:14.018501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.022898  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.027485  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:14.027580  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:14.067189  346554 cri.go:89] found id: ""
	I1002 07:22:14.067277  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.067293  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:14.067301  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:14.067380  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:14.098843  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:14.098868  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:14.098874  346554 cri.go:89] found id: ""
	I1002 07:22:14.098882  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:14.098938  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.103497  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.107744  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:14.107820  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:14.136768  346554 cri.go:89] found id: ""
	I1002 07:22:14.136797  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.136807  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:14.136813  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:14.136880  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:14.163984  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:14.164055  346554 cri.go:89] found id: ""
	I1002 07:22:14.164079  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:14.164165  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.168259  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:14.168337  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:14.201762  346554 cri.go:89] found id: ""
	I1002 07:22:14.201789  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.201799  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:14.201809  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:14.201822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:14.228036  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:14.228067  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:14.305247  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:14.305286  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:14.417180  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:14.417216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:14.434371  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:14.434404  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:14.494496  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:14.494534  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:14.530240  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:14.530274  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:14.565285  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:14.565312  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:14.656059  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:14.648012    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.648398    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.649913    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.650225    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.651841    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:14.648012    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.648398    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.649913    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.650225    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.651841    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:14.656082  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:14.656096  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:14.684431  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:14.684465  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:14.720953  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:14.720987  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:17.291251  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:17.303244  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:17.303315  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:17.330183  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:17.330208  346554 cri.go:89] found id: ""
	I1002 07:22:17.330217  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:17.330281  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.334207  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:17.334281  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:17.363238  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:17.363263  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:17.363269  346554 cri.go:89] found id: ""
	I1002 07:22:17.363276  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:17.363331  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.367005  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.370719  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:17.370792  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:17.397991  346554 cri.go:89] found id: ""
	I1002 07:22:17.398016  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.398026  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:17.398032  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:17.398092  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:17.431537  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:17.431562  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:17.431568  346554 cri.go:89] found id: ""
	I1002 07:22:17.431575  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:17.431631  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.435774  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.439628  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:17.439701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:17.470573  346554 cri.go:89] found id: ""
	I1002 07:22:17.470598  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.470614  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:17.470621  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:17.470689  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:17.496787  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:17.496813  346554 cri.go:89] found id: ""
	I1002 07:22:17.496822  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:17.496879  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.500676  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:17.500809  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:17.528111  346554 cri.go:89] found id: ""
	I1002 07:22:17.528136  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.528145  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:17.528155  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:17.528167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:17.629228  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:17.629269  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:17.719781  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:17.711134    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.712057    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713690    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713991    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.715616    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:17.711134    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.712057    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713690    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713991    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.715616    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:17.719804  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:17.719818  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:17.791077  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:17.791176  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:17.835873  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:17.835907  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:17.865669  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:17.865698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:17.947809  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:17.947851  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:17.966021  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:17.966054  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:17.993388  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:17.993419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:18.067826  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:18.067915  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:18.098854  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:18.098928  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:20.640412  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:20.654177  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:20.654280  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:20.689110  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:20.689138  346554 cri.go:89] found id: ""
	I1002 07:22:20.689146  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:20.689210  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.692968  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:20.693043  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:20.726246  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:20.726271  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:20.726276  346554 cri.go:89] found id: ""
	I1002 07:22:20.726284  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:20.726340  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.730329  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.734406  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:20.734503  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:20.762306  346554 cri.go:89] found id: ""
	I1002 07:22:20.762332  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.762341  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:20.762348  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:20.762406  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:20.801345  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:20.801370  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:20.801375  346554 cri.go:89] found id: ""
	I1002 07:22:20.801383  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:20.801461  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.805572  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.809363  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:20.809439  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:20.839370  346554 cri.go:89] found id: ""
	I1002 07:22:20.839396  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.839405  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:20.839411  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:20.839487  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:20.866883  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:20.866908  346554 cri.go:89] found id: ""
	I1002 07:22:20.866918  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:20.866994  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.871482  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:20.871602  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:20.915272  346554 cri.go:89] found id: ""
	I1002 07:22:20.915297  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.915306  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:20.915334  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:20.915354  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:20.969984  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:20.970023  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:21.008389  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:21.008426  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:21.097527  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:21.097564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:21.131052  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:21.131112  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:21.250056  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:21.250095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:21.266497  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:21.266528  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:21.336488  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:21.328099    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.328680    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330526    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330860    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.332595    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:21.328099    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.328680    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330526    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330860    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.332595    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:21.336517  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:21.336534  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:21.365447  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:21.365477  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:21.432439  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:21.432517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:21.464158  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:21.464186  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:23.993684  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:24.012128  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:24.012344  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:24.041820  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:24.041844  346554 cri.go:89] found id: ""
	I1002 07:22:24.041853  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:24.041913  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.045939  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:24.046012  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:24.080951  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:24.080971  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:24.080977  346554 cri.go:89] found id: ""
	I1002 07:22:24.080984  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:24.081042  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.086379  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.090878  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:24.090956  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:24.118754  346554 cri.go:89] found id: ""
	I1002 07:22:24.118793  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.118803  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:24.118809  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:24.118876  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:24.162937  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:24.162960  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:24.162967  346554 cri.go:89] found id: ""
	I1002 07:22:24.162975  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:24.163041  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.167416  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.171521  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:24.171612  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:24.198740  346554 cri.go:89] found id: ""
	I1002 07:22:24.198764  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.198774  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:24.198780  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:24.198849  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:24.226586  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:24.226607  346554 cri.go:89] found id: ""
	I1002 07:22:24.226616  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:24.226676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.230625  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:24.230701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:24.258053  346554 cri.go:89] found id: ""
	I1002 07:22:24.258089  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.258100  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:24.258110  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:24.258122  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:24.357393  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:24.357431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:24.375359  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:24.375390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:24.444675  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:24.444714  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:24.484227  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:24.484262  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:24.512674  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:24.512707  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:24.597691  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:24.589362    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.589905    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.591682    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.592352    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.593874    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:24.589362    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.589905    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.591682    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.592352    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.593874    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:24.597712  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:24.597728  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:24.628466  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:24.628492  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:24.706367  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:24.706408  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:24.737446  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:24.737475  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:24.822997  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:24.823036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:27.355482  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:27.366566  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:27.366636  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:27.394804  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:27.394828  346554 cri.go:89] found id: ""
	I1002 07:22:27.394837  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:27.394901  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.398931  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:27.399000  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:27.425553  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:27.425576  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:27.425582  346554 cri.go:89] found id: ""
	I1002 07:22:27.425590  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:27.425651  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.429400  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.433140  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:27.433237  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:27.463605  346554 cri.go:89] found id: ""
	I1002 07:22:27.463626  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.463635  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:27.463642  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:27.463701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:27.493043  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:27.493074  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:27.493080  346554 cri.go:89] found id: ""
	I1002 07:22:27.493087  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:27.493145  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.497072  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.500729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:27.500805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:27.531993  346554 cri.go:89] found id: ""
	I1002 07:22:27.532021  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.532031  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:27.532037  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:27.532097  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:27.559232  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:27.559310  346554 cri.go:89] found id: ""
	I1002 07:22:27.559329  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:27.559400  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.563624  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:27.563744  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:27.593254  346554 cri.go:89] found id: ""
	I1002 07:22:27.593281  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.593302  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:27.593313  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:27.593328  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:27.622961  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:27.622992  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:27.700292  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:27.690392    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.691740    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.692828    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694000    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694658    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:27.690392    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.691740    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.692828    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694000    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694658    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:27.700315  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:27.700329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:27.760790  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:27.760830  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:27.800937  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:27.800976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:27.879230  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:27.879273  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:27.910457  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:27.910561  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:27.998247  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:27.998287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:28.039823  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:28.039856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:28.148384  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:28.148472  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:28.170086  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:28.170114  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:30.702644  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:30.713672  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:30.713748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:30.742461  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:30.742484  346554 cri.go:89] found id: ""
	I1002 07:22:30.742493  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:30.742553  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.746359  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:30.746446  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:30.777229  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:30.777256  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:30.777261  346554 cri.go:89] found id: ""
	I1002 07:22:30.777269  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:30.777345  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.781661  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.785300  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:30.785373  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:30.812435  346554 cri.go:89] found id: ""
	I1002 07:22:30.812465  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.812474  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:30.812481  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:30.812558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:30.839730  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:30.839752  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:30.839758  346554 cri.go:89] found id: ""
	I1002 07:22:30.839765  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:30.839851  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.843582  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.847332  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:30.847414  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:30.877768  346554 cri.go:89] found id: ""
	I1002 07:22:30.877795  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.877804  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:30.877811  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:30.877919  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:30.906930  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:30.906954  346554 cri.go:89] found id: ""
	I1002 07:22:30.906970  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:30.907050  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.911004  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:30.911153  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:30.936781  346554 cri.go:89] found id: ""
	I1002 07:22:30.936817  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.936826  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:30.936836  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:30.936849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:30.963944  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:30.963978  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:31.039393  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:31.039431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:31.056356  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:31.056396  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:31.086443  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:31.086483  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:31.129305  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:31.129342  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:31.206518  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:31.206557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:31.246963  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:31.246992  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:31.349345  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:31.349380  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:31.424210  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:31.415481    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.416258    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.417862    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.418419    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.420138    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:31.415481    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.416258    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.417862    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.418419    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.420138    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:31.424235  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:31.424247  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:31.494342  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:31.494381  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.028701  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:34.039883  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:34.039955  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:34.082124  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:34.082149  346554 cri.go:89] found id: ""
	I1002 07:22:34.082158  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:34.082222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.086333  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:34.086408  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:34.115537  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:34.115562  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:34.115568  346554 cri.go:89] found id: ""
	I1002 07:22:34.115575  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:34.115632  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.119540  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.123109  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:34.123181  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:34.149943  346554 cri.go:89] found id: ""
	I1002 07:22:34.149969  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.149978  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:34.149985  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:34.150098  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:34.177023  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:34.177044  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.177051  346554 cri.go:89] found id: ""
	I1002 07:22:34.177060  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:34.177117  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.180893  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.184341  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:34.184418  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:34.211353  346554 cri.go:89] found id: ""
	I1002 07:22:34.211377  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.211385  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:34.211391  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:34.211449  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:34.237574  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:34.237593  346554 cri.go:89] found id: ""
	I1002 07:22:34.237601  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:34.237659  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.241551  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:34.241626  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:34.272007  346554 cri.go:89] found id: ""
	I1002 07:22:34.272030  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.272039  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:34.272048  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:34.272059  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:34.344503  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:34.344540  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:34.378151  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:34.378181  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:34.479542  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:34.479579  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:34.561912  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:34.553376    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.554044    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.555646    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.556517    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.558373    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:34.553376    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.554044    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.555646    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.556517    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.558373    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:34.561988  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:34.562009  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:34.627010  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:34.627046  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:34.675398  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:34.675431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:34.761258  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:34.761301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:34.783800  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:34.783847  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:34.822817  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:34.822856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.855272  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:34.855298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:37.390316  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:37.401208  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:37.401285  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:37.428835  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:37.428857  346554 cri.go:89] found id: ""
	I1002 07:22:37.428864  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:37.428934  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.433201  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:37.433276  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:37.461633  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:37.461664  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:37.461670  346554 cri.go:89] found id: ""
	I1002 07:22:37.461678  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:37.461736  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.465629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.469272  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:37.469348  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:37.498524  346554 cri.go:89] found id: ""
	I1002 07:22:37.498551  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.498561  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:37.498567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:37.498627  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:37.535431  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:37.535453  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:37.535458  346554 cri.go:89] found id: ""
	I1002 07:22:37.535465  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:37.535523  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.539518  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.543351  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:37.543429  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:37.569817  346554 cri.go:89] found id: ""
	I1002 07:22:37.569886  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.569912  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:37.569938  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:37.570048  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:37.600094  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:37.600161  346554 cri.go:89] found id: ""
	I1002 07:22:37.600184  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:37.600279  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.604474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:37.604627  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:37.635043  346554 cri.go:89] found id: ""
	I1002 07:22:37.635139  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.635164  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:37.635209  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:37.635241  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:37.652712  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:37.652747  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:37.724304  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:37.715214    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.715952    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.717909    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.718653    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.720486    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:37.715214    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.715952    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.717909    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.718653    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.720486    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:37.724327  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:37.724343  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:37.778979  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:37.779018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:37.823368  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:37.823400  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:37.852458  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:37.852487  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:37.935415  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:37.935451  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:38.032660  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:38.032698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:38.062211  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:38.062292  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:38.141041  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:38.141076  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:38.167504  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:38.167535  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:40.716529  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:40.727155  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:40.727237  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:40.759650  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:40.759670  346554 cri.go:89] found id: ""
	I1002 07:22:40.759677  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:40.759739  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.763794  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:40.763891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:40.799428  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:40.799495  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:40.799505  346554 cri.go:89] found id: ""
	I1002 07:22:40.799513  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:40.799587  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.804441  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.808181  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:40.808256  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:40.839434  346554 cri.go:89] found id: ""
	I1002 07:22:40.839458  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.839466  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:40.839479  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:40.839540  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:40.866347  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:40.866368  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:40.866373  346554 cri.go:89] found id: ""
	I1002 07:22:40.866380  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:40.866435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.870243  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.873802  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:40.873887  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:40.915472  346554 cri.go:89] found id: ""
	I1002 07:22:40.915499  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.915508  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:40.915515  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:40.915589  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:40.945530  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:40.945552  346554 cri.go:89] found id: ""
	I1002 07:22:40.945570  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:40.945629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.949410  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:40.949513  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:40.976546  346554 cri.go:89] found id: ""
	I1002 07:22:40.976589  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.976598  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:40.976608  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:40.976620  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:40.993923  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:40.993952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:41.069718  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:41.061732    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.062193    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.063798    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.064141    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.065342    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:41.061732    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.062193    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.063798    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.064141    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.065342    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:41.069746  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:41.069760  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:41.101275  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:41.101313  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:41.185486  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:41.185522  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:41.213391  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:41.213419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:41.286933  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:41.286973  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:41.325032  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:41.325063  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:41.427475  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:41.427517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:41.507722  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:41.507762  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:41.553697  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:41.553731  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:44.083713  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:44.094946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:44.095050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:44.122939  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:44.122961  346554 cri.go:89] found id: ""
	I1002 07:22:44.122970  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:44.123027  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.126926  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:44.127001  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:44.168228  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:44.168253  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:44.168259  346554 cri.go:89] found id: ""
	I1002 07:22:44.168267  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:44.168325  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.172203  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.176051  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:44.176154  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:44.207518  346554 cri.go:89] found id: ""
	I1002 07:22:44.207545  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.207554  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:44.207560  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:44.207619  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:44.236177  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:44.236200  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:44.236206  346554 cri.go:89] found id: ""
	I1002 07:22:44.236214  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:44.236274  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.239868  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.243456  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:44.243575  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:44.269491  346554 cri.go:89] found id: ""
	I1002 07:22:44.269568  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.269596  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:44.269612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:44.269687  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:44.295403  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:44.295423  346554 cri.go:89] found id: ""
	I1002 07:22:44.295431  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:44.295490  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.299440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:44.299555  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:44.333034  346554 cri.go:89] found id: ""
	I1002 07:22:44.333110  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.333136  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:44.333175  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:44.333210  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:44.364108  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:44.364139  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:44.433101  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:44.424314    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.424960    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.426515    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.427164    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.428946    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:44.424314    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.424960    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.426515    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.427164    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.428946    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:44.433123  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:44.433137  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:44.489676  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:44.489711  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:44.535780  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:44.535819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:44.563832  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:44.563862  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:44.644267  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:44.644308  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:44.678038  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:44.678077  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:44.779429  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:44.779467  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:44.802305  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:44.802335  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:44.828371  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:44.828400  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.412789  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:47.423373  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:47.423464  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:47.451136  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:47.451162  346554 cri.go:89] found id: ""
	I1002 07:22:47.451171  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:47.451237  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.455412  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:47.455531  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:47.487387  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:47.487418  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:47.487424  346554 cri.go:89] found id: ""
	I1002 07:22:47.487432  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:47.487491  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.491360  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.495265  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:47.495336  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:47.534120  346554 cri.go:89] found id: ""
	I1002 07:22:47.534144  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.534153  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:47.534159  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:47.534223  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:47.567581  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.567604  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:47.567610  346554 cri.go:89] found id: ""
	I1002 07:22:47.567618  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:47.567676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.571558  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.575428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:47.575500  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:47.604017  346554 cri.go:89] found id: ""
	I1002 07:22:47.604041  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.604050  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:47.604057  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:47.604178  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:47.631246  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:47.631266  346554 cri.go:89] found id: ""
	I1002 07:22:47.631275  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:47.631336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.635224  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:47.635329  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:47.662879  346554 cri.go:89] found id: ""
	I1002 07:22:47.662906  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.662916  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:47.662925  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:47.662969  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:47.758850  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:47.758889  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:47.787003  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:47.787035  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.865561  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:47.865598  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:47.894009  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:47.894083  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:47.911472  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:47.911547  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:47.992995  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:47.978023    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.979713    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986171    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986781    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.988190    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:47.978023    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.979713    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986171    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986781    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.988190    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:47.993061  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:47.993095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:48.054795  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:48.054833  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:48.105647  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:48.105681  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:48.136822  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:48.136852  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:48.221826  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:48.221868  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:50.759146  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:50.770232  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:50.770304  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:50.808978  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:50.808999  346554 cri.go:89] found id: ""
	I1002 07:22:50.809014  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:50.809071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.812891  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:50.812973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:50.844548  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:50.844621  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:50.844634  346554 cri.go:89] found id: ""
	I1002 07:22:50.844643  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:50.844704  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.848854  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.853318  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:50.853395  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:50.879864  346554 cri.go:89] found id: ""
	I1002 07:22:50.879885  346554 logs.go:282] 0 containers: []
	W1002 07:22:50.879894  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:50.879901  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:50.879978  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:50.913482  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:50.913502  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:50.913506  346554 cri.go:89] found id: ""
	I1002 07:22:50.913514  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:50.913571  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.917411  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.920913  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:50.920995  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:50.953742  346554 cri.go:89] found id: ""
	I1002 07:22:50.953769  346554 logs.go:282] 0 containers: []
	W1002 07:22:50.953778  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:50.953785  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:50.953849  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:50.982216  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:50.982239  346554 cri.go:89] found id: ""
	I1002 07:22:50.982247  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:50.982312  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.985960  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:50.986036  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:51.023369  346554 cri.go:89] found id: ""
	I1002 07:22:51.023407  346554 logs.go:282] 0 containers: []
	W1002 07:22:51.023416  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:51.023425  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:51.023437  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:51.124423  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:51.124471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:51.162362  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:51.162466  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:51.193077  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:51.193120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:51.209317  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:51.209348  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:51.286706  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:51.277838    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.278649    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280280    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280639    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.282163    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:51.277838    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.278649    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280280    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280639    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.282163    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:51.286736  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:51.286768  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:51.314928  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:51.315005  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:51.375178  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:51.375216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:51.450324  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:51.450368  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:51.478495  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:51.478526  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:51.563131  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:51.563178  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:54.112345  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:54.123567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:54.123643  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:54.154215  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:54.154239  346554 cri.go:89] found id: ""
	I1002 07:22:54.154247  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:54.154306  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.158242  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:54.158319  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:54.192307  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:54.192332  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:54.192343  346554 cri.go:89] found id: ""
	I1002 07:22:54.192351  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:54.192419  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.197194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.201582  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:54.201705  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:54.228380  346554 cri.go:89] found id: ""
	I1002 07:22:54.228415  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.228425  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:54.228432  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:54.228525  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:54.256056  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:54.256080  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:54.256087  346554 cri.go:89] found id: ""
	I1002 07:22:54.256094  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:54.256155  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.260143  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.263934  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:54.264008  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:54.290214  346554 cri.go:89] found id: ""
	I1002 07:22:54.290241  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.290251  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:54.290256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:54.290314  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:54.319063  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:54.319117  346554 cri.go:89] found id: ""
	I1002 07:22:54.319126  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:54.319184  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.323448  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:54.323547  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:54.354341  346554 cri.go:89] found id: ""
	I1002 07:22:54.354366  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.354374  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:54.354384  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:54.354396  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:54.409595  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:54.409633  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:54.449908  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:54.449944  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:54.532130  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:54.532170  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:54.559794  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:54.559822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:54.593620  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:54.593651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:54.700915  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:54.700951  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:54.727426  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:54.727452  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:54.756226  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:54.756263  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:54.841269  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:54.841312  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:54.859387  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:54.859425  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:54.940701  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:54.932413    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.933246    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.934849    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.935238    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.936807    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:54.932413    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.933246    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.934849    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.935238    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.936807    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:57.441672  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:57.453569  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:57.453639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:57.483699  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:57.483722  346554 cri.go:89] found id: ""
	I1002 07:22:57.483746  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:57.483845  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.487681  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:57.487775  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:57.518495  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:57.518520  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:57.518526  346554 cri.go:89] found id: ""
	I1002 07:22:57.518534  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:57.518593  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.522615  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.526448  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:57.526523  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:57.553219  346554 cri.go:89] found id: ""
	I1002 07:22:57.553246  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.553255  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:57.553263  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:57.553327  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:57.582109  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:57.582132  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:57.582137  346554 cri.go:89] found id: ""
	I1002 07:22:57.582146  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:57.582209  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.586222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.590675  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:57.590752  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:57.621475  346554 cri.go:89] found id: ""
	I1002 07:22:57.621544  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.621567  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:57.621592  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:57.621680  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:57.647238  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:57.647304  346554 cri.go:89] found id: ""
	I1002 07:22:57.647329  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:57.647425  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.651299  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:57.651391  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:57.681221  346554 cri.go:89] found id: ""
	I1002 07:22:57.681298  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.681324  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:57.681350  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:57.681387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:57.757042  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:57.757079  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:57.789483  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:57.789519  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:57.876258  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:57.876301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:57.909957  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:57.909986  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:57.994768  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:57.985195    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.985977    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.987651    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.988458    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.990380    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:57.985195    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.985977    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.987651    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.988458    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.990380    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:57.994790  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:57.994804  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:58.057805  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:58.057845  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:58.093196  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:58.093227  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:58.192017  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:58.192055  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:58.209558  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:58.209587  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:58.236404  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:58.236433  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:00.781745  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:00.796477  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:00.796552  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:00.823241  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:00.823265  346554 cri.go:89] found id: ""
	I1002 07:23:00.823273  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:00.823327  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.827586  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:00.827675  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:00.862251  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:00.862274  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:00.862280  346554 cri.go:89] found id: ""
	I1002 07:23:00.862287  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:00.862348  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.866453  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.870120  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:00.870189  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:00.910250  346554 cri.go:89] found id: ""
	I1002 07:23:00.910318  346554 logs.go:282] 0 containers: []
	W1002 07:23:00.910341  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:00.910366  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:00.910451  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:00.939142  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:00.939208  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:00.939234  346554 cri.go:89] found id: ""
	I1002 07:23:00.939243  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:00.939300  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.943281  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.947110  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:00.947180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:00.979402  346554 cri.go:89] found id: ""
	I1002 07:23:00.979431  346554 logs.go:282] 0 containers: []
	W1002 07:23:00.979444  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:00.979452  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:00.979518  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:01.016038  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:01.016103  346554 cri.go:89] found id: ""
	I1002 07:23:01.016131  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:01.016225  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:01.020366  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:01.020520  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:01.049712  346554 cri.go:89] found id: ""
	I1002 07:23:01.049780  346554 logs.go:282] 0 containers: []
	W1002 07:23:01.049803  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:01.049831  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:01.049870  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:01.101253  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:01.101287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:01.200014  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:01.200053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:01.277860  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:01.264774    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.266699    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.271332    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.272085    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.273912    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:01.264774    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.266699    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.271332    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.272085    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.273912    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:01.277885  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:01.277898  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:01.341507  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:01.341545  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:01.413278  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:01.413313  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:01.446875  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:01.446914  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:01.475436  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:01.475464  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:01.551813  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:01.551853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:01.585150  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:01.585187  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:01.601574  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:01.601606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:04.131042  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:04.142520  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:04.142634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:04.176669  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:04.176692  346554 cri.go:89] found id: ""
	I1002 07:23:04.176701  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:04.176763  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.180972  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:04.181051  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:04.208821  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:04.208846  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:04.208851  346554 cri.go:89] found id: ""
	I1002 07:23:04.208859  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:04.208925  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.213191  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.217006  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:04.217129  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:04.245751  346554 cri.go:89] found id: ""
	I1002 07:23:04.245775  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.245790  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:04.245798  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:04.245859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:04.284664  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:04.284685  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:04.284689  346554 cri.go:89] found id: ""
	I1002 07:23:04.284697  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:04.284756  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.288986  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.292617  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:04.292700  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:04.320145  346554 cri.go:89] found id: ""
	I1002 07:23:04.320171  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.320180  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:04.320187  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:04.320245  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:04.347600  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:04.347622  346554 cri.go:89] found id: ""
	I1002 07:23:04.347631  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:04.347686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.351440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:04.351511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:04.383653  346554 cri.go:89] found id: ""
	I1002 07:23:04.383732  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.383749  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:04.383759  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:04.383775  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:04.440177  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:04.440218  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:04.468956  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:04.469027  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:04.545741  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:04.545780  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:04.579865  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:04.579895  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:04.681656  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:04.681695  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:04.752352  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:04.744202   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.744834   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746456   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746996   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.748061   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:04.744202   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.744834   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746456   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746996   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.748061   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:04.752373  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:04.752387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:04.793420  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:04.793493  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:04.864258  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:04.864293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:04.893921  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:04.894006  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:04.911663  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:04.911693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.444239  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:07.455140  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:07.455218  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:07.484101  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.484124  346554 cri.go:89] found id: ""
	I1002 07:23:07.484133  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:07.484189  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.488067  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:07.488145  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:07.522958  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:07.523021  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:07.523044  346554 cri.go:89] found id: ""
	I1002 07:23:07.523071  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:07.523194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.527249  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.531022  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:07.531124  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:07.557498  346554 cri.go:89] found id: ""
	I1002 07:23:07.557519  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.557528  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:07.557535  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:07.557609  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:07.584061  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:07.584092  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:07.584096  346554 cri.go:89] found id: ""
	I1002 07:23:07.584105  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:07.584170  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.587957  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.591564  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:07.591639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:07.619944  346554 cri.go:89] found id: ""
	I1002 07:23:07.619971  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.619980  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:07.619987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:07.620050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:07.648834  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:07.648855  346554 cri.go:89] found id: ""
	I1002 07:23:07.648863  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:07.648919  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.652819  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:07.652937  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:07.682396  346554 cri.go:89] found id: ""
	I1002 07:23:07.682421  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.682430  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:07.682439  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:07.682452  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:07.751625  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:07.743061   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.744026   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.745740   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.746058   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.747713   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:07.743061   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.744026   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.745740   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.746058   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.747713   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:07.751650  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:07.751667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.778524  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:07.778551  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:07.850872  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:07.850910  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:07.887246  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:07.887283  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:07.959701  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:07.959738  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:07.989632  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:07.989661  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:08.009848  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:08.009885  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:08.041024  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:08.041052  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:08.120762  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:08.120798  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:08.174204  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:08.174234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:10.791227  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:10.804748  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:10.804834  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:10.833209  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:10.833256  346554 cri.go:89] found id: ""
	I1002 07:23:10.833264  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:10.833327  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.837233  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:10.837307  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:10.867407  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:10.867431  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:10.867436  346554 cri.go:89] found id: ""
	I1002 07:23:10.867444  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:10.867501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.871289  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.874962  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:10.875041  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:10.909346  346554 cri.go:89] found id: ""
	I1002 07:23:10.909372  346554 logs.go:282] 0 containers: []
	W1002 07:23:10.909381  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:10.909388  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:10.909444  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:10.944052  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:10.944127  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:10.944152  346554 cri.go:89] found id: ""
	I1002 07:23:10.944181  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:10.944285  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.952530  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.957003  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:10.957085  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:10.984253  346554 cri.go:89] found id: ""
	I1002 07:23:10.984287  346554 logs.go:282] 0 containers: []
	W1002 07:23:10.984297  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:10.984321  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:10.984401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:11.018350  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:11.018417  346554 cri.go:89] found id: ""
	I1002 07:23:11.018442  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:11.018520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:11.022612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:11.022707  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:11.054294  346554 cri.go:89] found id: ""
	I1002 07:23:11.054371  346554 logs.go:282] 0 containers: []
	W1002 07:23:11.054394  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:11.054437  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:11.054471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:11.132821  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:11.124867   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.125650   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.126895   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.127432   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.129002   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:11.124867   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.125650   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.126895   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.127432   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.129002   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:11.132846  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:11.132859  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:11.161373  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:11.161401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:11.219899  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:11.219936  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:11.250524  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:11.250554  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:11.282533  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:11.282564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:11.385870  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:11.385909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:11.402968  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:11.402997  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:11.447948  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:11.447983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:11.521218  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:11.521256  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:11.551246  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:11.551320  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:14.129146  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:14.140212  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:14.140315  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:14.167561  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:14.167585  346554 cri.go:89] found id: ""
	I1002 07:23:14.167593  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:14.167691  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.171728  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:14.171841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:14.198571  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:14.198594  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:14.198600  346554 cri.go:89] found id: ""
	I1002 07:23:14.198607  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:14.198693  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.202658  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.207962  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:14.208057  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:14.233944  346554 cri.go:89] found id: ""
	I1002 07:23:14.233970  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.233979  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:14.233986  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:14.234064  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:14.264854  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:14.264878  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:14.264884  346554 cri.go:89] found id: ""
	I1002 07:23:14.264892  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:14.264948  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.268797  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.272677  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:14.272756  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:14.304992  346554 cri.go:89] found id: ""
	I1002 07:23:14.305031  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.305041  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:14.305047  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:14.305120  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:14.335500  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:14.335570  346554 cri.go:89] found id: ""
	I1002 07:23:14.335593  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:14.335684  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.339428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:14.339502  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:14.366928  346554 cri.go:89] found id: ""
	I1002 07:23:14.366954  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.366964  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:14.366973  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:14.366984  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:14.441765  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:14.441808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:14.473510  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:14.473541  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:14.552162  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:14.552201  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:14.586130  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:14.586160  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:14.602135  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:14.602164  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:14.638523  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:14.638557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:14.717772  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:14.717808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:14.748211  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:14.748283  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:14.848964  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:14.849003  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:14.926254  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:14.916550   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.917229   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.918910   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.919742   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.921374   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:14.916550   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.917229   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.918910   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.919742   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.921374   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:14.926277  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:14.926290  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:17.456912  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:17.467889  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:17.467979  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:17.495434  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:17.495457  346554 cri.go:89] found id: ""
	I1002 07:23:17.495466  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:17.495524  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.499591  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:17.499663  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:17.535737  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:17.535757  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:17.535761  346554 cri.go:89] found id: ""
	I1002 07:23:17.535768  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:17.535826  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.540069  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.543817  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:17.543891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:17.573877  346554 cri.go:89] found id: ""
	I1002 07:23:17.573907  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.573917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:17.573923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:17.573989  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:17.609297  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:17.609320  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:17.609326  346554 cri.go:89] found id: ""
	I1002 07:23:17.609333  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:17.609390  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.613640  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.617183  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:17.617253  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:17.647944  346554 cri.go:89] found id: ""
	I1002 07:23:17.647971  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.647980  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:17.647987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:17.648045  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:17.674528  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:17.674552  346554 cri.go:89] found id: ""
	I1002 07:23:17.674561  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:17.674617  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.678979  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:17.679143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:17.706803  346554 cri.go:89] found id: ""
	I1002 07:23:17.706828  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.706837  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:17.706846  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:17.706857  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:17.801171  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:17.801207  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:17.817922  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:17.817952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:17.889064  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:17.889103  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:17.971481  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:17.971518  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:18.051668  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:18.051712  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:18.090695  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:18.090723  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:18.162304  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:18.153808   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.154523   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156207   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156763   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.158433   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:18.153808   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.154523   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156207   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156763   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.158433   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:18.162328  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:18.162343  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:18.194200  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:18.194233  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:18.231522  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:18.231557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:18.263215  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:18.263246  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:20.795234  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:20.807871  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:20.807939  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:20.839049  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:20.839070  346554 cri.go:89] found id: ""
	I1002 07:23:20.839098  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:20.839172  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.842946  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:20.843023  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:20.873446  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:20.873469  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:20.873475  346554 cri.go:89] found id: ""
	I1002 07:23:20.873484  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:20.873540  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.877435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.881337  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:20.881415  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:20.918940  346554 cri.go:89] found id: ""
	I1002 07:23:20.918971  346554 logs.go:282] 0 containers: []
	W1002 07:23:20.918980  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:20.918987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:20.919046  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:20.951052  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:20.951075  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:20.951112  346554 cri.go:89] found id: ""
	I1002 07:23:20.951120  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:20.951185  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.955805  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.959649  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:20.959737  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:20.987685  346554 cri.go:89] found id: ""
	I1002 07:23:20.987710  346554 logs.go:282] 0 containers: []
	W1002 07:23:20.987719  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:20.987726  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:20.987792  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:21.028577  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:21.028602  346554 cri.go:89] found id: ""
	I1002 07:23:21.028622  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:21.028683  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:21.032899  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:21.032977  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:21.062654  346554 cri.go:89] found id: ""
	I1002 07:23:21.062679  346554 logs.go:282] 0 containers: []
	W1002 07:23:21.062688  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:21.062698  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:21.062710  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:21.091027  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:21.091059  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:21.159267  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:21.159307  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:21.231814  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:21.231856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:21.263174  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:21.263205  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:21.310161  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:21.310194  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:21.349961  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:21.349997  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:21.379224  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:21.379306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:21.454682  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:21.454722  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:21.560920  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:21.560960  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:21.578179  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:21.578211  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:21.668218  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:21.658544   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.659665   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.660225   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662214   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662758   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 07:23:24.169201  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:24.181390  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:24.181463  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:24.213873  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:24.213896  346554 cri.go:89] found id: ""
	I1002 07:23:24.213905  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:24.213963  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.217730  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:24.217807  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:24.252439  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:24.252471  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:24.252476  346554 cri.go:89] found id: ""
	I1002 07:23:24.252484  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:24.252567  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.256307  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.260273  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:24.260349  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:24.287826  346554 cri.go:89] found id: ""
	I1002 07:23:24.287852  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.287862  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:24.287870  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:24.287973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:24.315859  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:24.315884  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:24.315890  346554 cri.go:89] found id: ""
	I1002 07:23:24.315897  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:24.315975  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.319993  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.323777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:24.323877  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:24.354601  346554 cri.go:89] found id: ""
	I1002 07:23:24.354631  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.354642  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:24.354648  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:24.354730  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:24.384370  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:24.384395  346554 cri.go:89] found id: ""
	I1002 07:23:24.384403  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:24.384488  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.388615  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:24.388695  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:24.415488  346554 cri.go:89] found id: ""
	I1002 07:23:24.415514  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.415523  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:24.415533  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:24.415546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:24.458158  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:24.458192  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:24.534624  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:24.534667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:24.567982  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:24.568016  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:24.596275  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:24.596306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:24.674293  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:24.674334  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:24.777997  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:24.778039  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:24.801006  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:24.801036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:24.862265  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:24.862303  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:24.913721  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:24.913755  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:24.991414  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:24.983196   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.983791   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985038   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985724   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.987370   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 07:23:24.991443  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:24.991458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.525665  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:27.536783  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:27.536869  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:27.563440  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.563507  346554 cri.go:89] found id: ""
	I1002 07:23:27.563531  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:27.563623  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.568154  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:27.568278  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:27.597184  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:27.597205  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:27.597211  346554 cri.go:89] found id: ""
	I1002 07:23:27.597230  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:27.597306  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.601073  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.604808  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:27.604880  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:27.635124  346554 cri.go:89] found id: ""
	I1002 07:23:27.635147  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.635155  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:27.635161  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:27.635220  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:27.662383  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:27.662455  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:27.662474  346554 cri.go:89] found id: ""
	I1002 07:23:27.662500  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:27.662607  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.666537  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.670164  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:27.670238  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:27.697001  346554 cri.go:89] found id: ""
	I1002 07:23:27.697028  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.697037  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:27.697044  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:27.697127  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:27.722638  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:27.722662  346554 cri.go:89] found id: ""
	I1002 07:23:27.722672  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:27.722728  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.726512  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:27.726591  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:27.755270  346554 cri.go:89] found id: ""
	I1002 07:23:27.755300  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.755309  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:27.755319  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:27.755330  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:27.854338  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:27.854379  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:27.928550  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:27.920395   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.921207   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.922978   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.923800   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.924646   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 07:23:27.928577  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:27.928590  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.960015  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:27.960047  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:28.025647  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:28.025706  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:28.064089  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:28.064125  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:28.158385  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:28.158423  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:28.196505  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:28.196533  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:28.215893  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:28.215921  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:28.246774  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:28.246821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:28.274010  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:28.274036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:30.852724  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:30.863588  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:30.863660  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:30.891349  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:30.891371  346554 cri.go:89] found id: ""
	I1002 07:23:30.891380  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:30.891457  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.895249  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:30.895343  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:30.922333  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:30.922356  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:30.922361  346554 cri.go:89] found id: ""
	I1002 07:23:30.922368  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:30.922423  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.926269  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.929885  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:30.929957  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:30.956216  346554 cri.go:89] found id: ""
	I1002 07:23:30.956253  346554 logs.go:282] 0 containers: []
	W1002 07:23:30.956269  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:30.956285  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:30.956347  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:30.984076  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:30.984101  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:30.984107  346554 cri.go:89] found id: ""
	I1002 07:23:30.984121  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:30.984182  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.988082  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.991650  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:30.991741  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:31.028148  346554 cri.go:89] found id: ""
	I1002 07:23:31.028174  346554 logs.go:282] 0 containers: []
	W1002 07:23:31.028184  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:31.028190  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:31.028274  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:31.057090  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:31.057116  346554 cri.go:89] found id: ""
	I1002 07:23:31.057125  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:31.057195  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:31.064614  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:31.064695  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:31.096928  346554 cri.go:89] found id: ""
	I1002 07:23:31.096996  346554 logs.go:282] 0 containers: []
	W1002 07:23:31.097022  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:31.097042  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:31.097069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:31.155662  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:31.155701  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:31.202926  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:31.202958  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:31.236483  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:31.236508  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:31.341179  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:31.341216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:31.368996  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:31.369022  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:31.449499  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:31.449539  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:31.476326  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:31.476354  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:31.561871  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:31.561909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:31.597214  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:31.597243  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:31.614646  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:31.614674  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:31.686141  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:31.672626   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.673293   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675177   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675791   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.677294   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 07:23:34.187051  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:34.198084  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:34.198163  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:34.225977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:34.226000  346554 cri.go:89] found id: ""
	I1002 07:23:34.226009  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:34.226094  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.230977  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:34.231053  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:34.258817  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:34.258840  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:34.258845  346554 cri.go:89] found id: ""
	I1002 07:23:34.258853  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:34.258908  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.262894  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.266671  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:34.266772  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:34.296183  346554 cri.go:89] found id: ""
	I1002 07:23:34.296207  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.296217  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:34.296223  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:34.296283  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:34.329604  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:34.329678  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:34.329698  346554 cri.go:89] found id: ""
	I1002 07:23:34.329722  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:34.329830  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.333641  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.337102  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:34.337170  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:34.365600  346554 cri.go:89] found id: ""
	I1002 07:23:34.365626  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.365636  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:34.365645  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:34.365708  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:34.393323  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:34.393347  346554 cri.go:89] found id: ""
	I1002 07:23:34.393357  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:34.393439  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.397338  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:34.397411  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:34.423876  346554 cri.go:89] found id: ""
	I1002 07:23:34.423899  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.423908  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:34.423918  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:34.423934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:34.453221  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:34.453251  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:34.481067  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:34.481095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:34.558614  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:34.558651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:34.601917  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:34.601948  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:34.705602  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:34.705637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:34.769442  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:34.760694   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.761723   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.762620   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764275   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764621   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 07:23:34.769466  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:34.769478  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:34.808589  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:34.808615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:34.869982  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:34.870024  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:34.959694  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:34.959739  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:34.976284  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:34.976319  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:37.518488  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:37.530159  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:37.530242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:37.557004  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:37.557026  346554 cri.go:89] found id: ""
	I1002 07:23:37.557035  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:37.557091  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.560903  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:37.560976  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:37.593556  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:37.593580  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:37.593586  346554 cri.go:89] found id: ""
	I1002 07:23:37.593594  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:37.593652  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.597692  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.601598  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:37.601672  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:37.628723  346554 cri.go:89] found id: ""
	I1002 07:23:37.628751  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.628761  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:37.628767  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:37.628832  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:37.656989  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:37.657010  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:37.657014  346554 cri.go:89] found id: ""
	I1002 07:23:37.657022  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:37.657090  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.660940  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.664730  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:37.664810  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:37.690545  346554 cri.go:89] found id: ""
	I1002 07:23:37.690567  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.690575  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:37.690582  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:37.690638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:37.718139  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:37.718164  346554 cri.go:89] found id: ""
	I1002 07:23:37.718173  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:37.718239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.722013  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:37.722130  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:37.748320  346554 cri.go:89] found id: ""
	I1002 07:23:37.748387  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.748410  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:37.748439  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:37.748478  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:37.848896  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:37.848937  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:37.935000  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:37.926953   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.927824   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929407   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929842   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.931438   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 07:23:37.935035  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:37.935050  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:37.998904  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:37.998949  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:38.039239  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:38.039274  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:38.133839  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:38.133878  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:38.164590  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:38.164617  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:38.247363  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:38.247401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:38.263025  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:38.263053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:38.292185  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:38.292215  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:38.324631  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:38.324662  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
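The block above is one full pass of minikube's log-gathering loop: it locates the control-plane containers with crictl, then tails their logs along with the kubelet and CRI-O journals while it waits for the apiserver to answer again. A minimal sketch of the same checks run by hand on the node (assuming crictl, journalctl and the bundled kubectl are present, as the log itself shows) could look like:

    # locate the kube-apiserver container and tail its most recent output
    APISERVER_ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    sudo crictl logs --tail 400 "$APISERVER_ID"
    # kubelet and CRI-O journals for the same window
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400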
	I1002 07:23:40.856053  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:40.866969  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:40.867037  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:40.908779  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:40.908802  346554 cri.go:89] found id: ""
	I1002 07:23:40.908811  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:40.908882  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.912652  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:40.912724  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:40.938681  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:40.938711  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:40.938717  346554 cri.go:89] found id: ""
	I1002 07:23:40.938725  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:40.938780  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.942512  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.945790  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:40.945860  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:40.973961  346554 cri.go:89] found id: ""
	I1002 07:23:40.974043  346554 logs.go:282] 0 containers: []
	W1002 07:23:40.974067  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:40.974093  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:40.974208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:41.001128  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:41.001152  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:41.001158  346554 cri.go:89] found id: ""
	I1002 07:23:41.001165  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:41.001239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.007592  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.012525  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:41.012642  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:41.044447  346554 cri.go:89] found id: ""
	I1002 07:23:41.044521  346554 logs.go:282] 0 containers: []
	W1002 07:23:41.044545  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:41.044571  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:41.044654  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:41.083149  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:41.083216  346554 cri.go:89] found id: ""
	I1002 07:23:41.083250  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:41.083338  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.087534  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:41.087663  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:41.118406  346554 cri.go:89] found id: ""
	I1002 07:23:41.118470  346554 logs.go:282] 0 containers: []
	W1002 07:23:41.118494  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:41.118528  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:41.118559  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:41.195975  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:41.196011  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:41.227140  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:41.227172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:41.313141  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:41.313180  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:41.416180  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:41.416218  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:41.459495  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:41.459536  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:41.488753  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:41.488785  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:41.532527  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:41.532560  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:41.548856  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:41.548885  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:41.618600  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:41.608308   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.609017   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.611140   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.612779   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.613471   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:41.608308   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.609017   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.611140   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.612779   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.613471   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:41.618624  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:41.618638  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:41.646628  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:41.646656  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.221221  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:44.231877  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:44.231950  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:44.257682  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:44.257714  346554 cri.go:89] found id: ""
	I1002 07:23:44.257724  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:44.257781  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.261470  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:44.261568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:44.291709  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.291732  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:44.291738  346554 cri.go:89] found id: ""
	I1002 07:23:44.291749  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:44.291806  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.295774  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.299744  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:44.299891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:44.326325  346554 cri.go:89] found id: ""
	I1002 07:23:44.326361  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.326372  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:44.326396  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:44.326476  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:44.353658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:44.353682  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:44.353687  346554 cri.go:89] found id: ""
	I1002 07:23:44.353694  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:44.353752  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.357660  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.361374  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:44.361448  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:44.390237  346554 cri.go:89] found id: ""
	I1002 07:23:44.390271  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.390281  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:44.390287  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:44.390356  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:44.421420  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:44.421444  346554 cri.go:89] found id: ""
	I1002 07:23:44.421453  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:44.421520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.425406  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:44.425480  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:44.453498  346554 cri.go:89] found id: ""
	I1002 07:23:44.453575  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.453599  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:44.453627  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:44.453663  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:44.469406  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:44.469489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:44.537881  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:44.529402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.530101   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.531787   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.532402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.534048   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:44.529402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.530101   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.531787   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.532402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.534048   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:44.537947  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:44.537976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:44.566669  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:44.566750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.626234  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:44.626311  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:44.663981  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:44.664015  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:44.743176  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:44.743211  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:44.769609  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:44.769637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:44.850618  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:44.850654  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:44.956047  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:44.956089  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:44.988388  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:44.988421  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
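Each cycle turns up two etcd and two kube-scheduler container ids but only one apiserver and one controller-manager; since the listing uses "crictl ps -a", that most likely means an earlier, exited instance alongside the current one, though the log only records the ids, not their states. A quick way to confirm on the node (same tools as above) would be:

    # show the etcd containers with their states instead of just their ids
    sudo crictl ps -a --name etcd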
	I1002 07:23:47.617924  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:47.629050  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:47.629142  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:47.657724  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:47.657747  346554 cri.go:89] found id: ""
	I1002 07:23:47.657756  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:47.657814  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.661805  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:47.661878  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:47.691884  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:47.691906  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:47.691911  346554 cri.go:89] found id: ""
	I1002 07:23:47.691919  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:47.691978  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.695983  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.699611  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:47.699685  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:47.731628  346554 cri.go:89] found id: ""
	I1002 07:23:47.731654  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.731664  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:47.731671  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:47.731732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:47.760694  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:47.760718  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:47.760723  346554 cri.go:89] found id: ""
	I1002 07:23:47.760731  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:47.760830  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.764776  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.768282  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:47.768363  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:47.800941  346554 cri.go:89] found id: ""
	I1002 07:23:47.800967  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.800976  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:47.800982  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:47.801049  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:47.828847  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:47.828870  346554 cri.go:89] found id: ""
	I1002 07:23:47.828879  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:47.828955  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.832777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:47.832850  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:47.861095  346554 cri.go:89] found id: ""
	I1002 07:23:47.861122  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.861131  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:47.861141  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:47.861184  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:47.893617  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:47.893649  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:47.990939  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:47.990977  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:48.007073  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:48.007153  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:48.043757  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:48.043786  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:48.136713  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:48.136750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:48.168119  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:48.168151  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:48.251880  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:48.251919  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:48.285530  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:48.285566  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:48.357500  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:48.349599   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.350239   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.351899   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.352380   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.353981   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:48.349599   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.350239   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.351899   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.352380   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.353981   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:48.357522  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:48.357537  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:48.403215  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:48.403293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.006650  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:51.028354  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:51.028471  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:51.057229  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:51.057253  346554 cri.go:89] found id: ""
	I1002 07:23:51.057262  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:51.057329  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.061731  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:51.061807  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:51.089750  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:51.089772  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:51.089778  346554 cri.go:89] found id: ""
	I1002 07:23:51.089785  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:51.089848  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.094055  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.097989  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:51.098090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:51.125460  346554 cri.go:89] found id: ""
	I1002 07:23:51.125487  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.125510  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:51.125536  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:51.125611  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:51.155658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.155684  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:51.155689  346554 cri.go:89] found id: ""
	I1002 07:23:51.155698  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:51.155757  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.159937  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.164562  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:51.164639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:51.194590  346554 cri.go:89] found id: ""
	I1002 07:23:51.194626  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.194635  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:51.194642  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:51.194720  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:51.230400  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:51.230424  346554 cri.go:89] found id: ""
	I1002 07:23:51.230433  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:51.230501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.235241  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:51.235335  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:51.264526  346554 cri.go:89] found id: ""
	I1002 07:23:51.264551  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.264562  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:51.264573  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:51.264603  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:51.292045  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:51.292128  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.377066  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:51.377104  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:51.408242  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:51.408273  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:51.437071  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:51.437100  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:51.508699  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:51.498128   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.498923   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.500573   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.501129   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.502653   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:51.498128   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.498923   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.500573   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.501129   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.502653   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:51.508723  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:51.508736  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:51.594052  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:51.594094  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:51.631968  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:51.632002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:51.710908  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:51.710950  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:51.751275  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:51.751309  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:51.859428  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:51.859510  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:54.376917  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:54.388247  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:54.388322  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:54.417539  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:54.417563  346554 cri.go:89] found id: ""
	I1002 07:23:54.417571  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:54.417634  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.421536  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:54.421612  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:54.452318  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:54.452342  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:54.452347  346554 cri.go:89] found id: ""
	I1002 07:23:54.452355  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:54.452410  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.457434  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.460992  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:54.461070  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:54.494010  346554 cri.go:89] found id: ""
	I1002 07:23:54.494031  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.494040  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:54.494045  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:54.494107  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:54.528280  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:54.528300  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:54.528305  346554 cri.go:89] found id: ""
	I1002 07:23:54.528312  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:54.528369  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.532283  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.535876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:54.535946  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:54.564214  346554 cri.go:89] found id: ""
	I1002 07:23:54.564240  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.564250  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:54.564256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:54.564347  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:54.594060  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:54.594084  346554 cri.go:89] found id: ""
	I1002 07:23:54.594093  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:54.594169  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.598344  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:54.598442  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:54.632402  346554 cri.go:89] found id: ""
	I1002 07:23:54.632426  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.632435  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:54.632445  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:54.632500  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:54.729477  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:54.729517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:54.800743  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:54.791704   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.792414   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794124   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794646   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.796482   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:54.791704   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.792414   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794124   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794646   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.796482   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:54.800815  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:54.800846  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:54.861032  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:54.861069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:54.889171  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:54.889244  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:54.925585  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:54.925615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:54.941174  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:54.941202  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:54.969205  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:54.969235  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:55.020047  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:55.020087  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:55.098725  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:55.098805  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:55.132210  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:55.132239  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
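Throughout these cycles the failure mode is the same: every "kubectl describe nodes" attempt dies with connection refused on localhost:8443 (the repeated memcache.go errors are the client's discovery calls hitting that same closed port), even though a kube-apiserver container id keeps being found. A quick hand check for whether the apiserver ever starts serving, assuming the standard /healthz endpoint and that curl is available on the node (neither is shown in the log), might be:

    # is anything answering on the apiserver port yet?
    curl -sk https://localhost:8443/healthz; echo
    # is the kube-apiserver container actually running, or only created/exited?
    sudo crictl ps --name kube-apiserver --state running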
	I1002 07:23:57.716428  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:57.730713  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:57.730787  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:57.757853  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:57.757878  346554 cri.go:89] found id: ""
	I1002 07:23:57.757887  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:57.757943  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.761971  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:57.762045  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:57.790866  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:57.790891  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:57.790897  346554 cri.go:89] found id: ""
	I1002 07:23:57.790904  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:57.790962  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.795621  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.799575  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:57.799653  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:57.830281  346554 cri.go:89] found id: ""
	I1002 07:23:57.830307  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.830317  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:57.830323  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:57.830382  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:57.858397  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:57.858420  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:57.858425  346554 cri.go:89] found id: ""
	I1002 07:23:57.858433  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:57.858488  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.862244  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.865851  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:57.865951  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:57.893160  346554 cri.go:89] found id: ""
	I1002 07:23:57.893234  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.893250  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:57.893258  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:57.893318  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:57.920413  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:57.920499  346554 cri.go:89] found id: ""
	I1002 07:23:57.920516  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:57.920585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.924327  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:57.924423  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:57.951174  346554 cri.go:89] found id: ""
	I1002 07:23:57.951197  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.951206  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:57.951216  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:57.951268  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:57.986550  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:57.986632  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:58.017224  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:58.017260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:58.122339  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:58.122377  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:58.138465  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:58.138494  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:58.168292  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:58.168317  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:58.230852  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:58.230890  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:58.328715  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:58.328764  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:58.357761  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:58.357792  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:58.444436  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:58.444482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:58.478280  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:58.478306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:58.560395  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:58.551535   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.552077   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554124   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554594   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.555744   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:58.551535   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.552077   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554124   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554594   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.555744   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
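	Every "describe nodes" attempt in this log fails the same way: kubectl cannot open a TCP connection to the apiserver at localhost:8443, so each retry ends in "connection refused". A minimal sketch of that reachability check, assuming only the Go standard library and the address taken from the error text; it illustrates the symptom and is not part of minikube's code:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The address every failed kubectl call in the log is trying to reach.
		addr := "localhost:8443"
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// Matches the repeated "dial tcp [::1]:8443: connect: connection refused" lines.
			fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("apiserver port %s is accepting connections\n", addr)
	}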
	I1002 07:24:01.061663  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:01.077726  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:01.077804  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:01.106834  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:01.106860  346554 cri.go:89] found id: ""
	I1002 07:24:01.106869  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:01.106940  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.110940  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:01.111014  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:01.139370  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:01.139392  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:01.139397  346554 cri.go:89] found id: ""
	I1002 07:24:01.139404  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:01.139466  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.143857  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.148114  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:01.148207  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:01.178376  346554 cri.go:89] found id: ""
	I1002 07:24:01.178468  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.178493  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:01.178522  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:01.178635  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:01.208075  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:01.208098  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:01.208103  346554 cri.go:89] found id: ""
	I1002 07:24:01.208111  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:01.208178  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.212014  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.216098  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:01.216233  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:01.245384  346554 cri.go:89] found id: ""
	I1002 07:24:01.245424  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.245434  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:01.245440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:01.245503  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:01.282247  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:01.282322  346554 cri.go:89] found id: ""
	I1002 07:24:01.282346  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:01.282443  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.288826  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:01.288905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:01.319901  346554 cri.go:89] found id: ""
	I1002 07:24:01.319926  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.319934  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:01.319943  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:01.319956  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:01.389606  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:01.389692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:01.444021  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:01.444055  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:01.526762  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:01.526804  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:01.559019  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:01.559049  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:01.634782  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:01.634818  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:01.709026  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:01.699679   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.700913   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.701980   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.702845   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.704779   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:01.699679   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.700913   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.701980   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.702845   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.704779   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:01.709100  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:01.709120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:01.738970  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:01.739000  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:01.770329  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:01.770364  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:01.884154  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:01.884232  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:01.902364  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:01.902390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.435943  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:04.447669  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:04.447785  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:04.478942  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.478965  346554 cri.go:89] found id: ""
	I1002 07:24:04.478974  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:04.479030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.483417  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:04.483511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:04.518294  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:04.518320  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:04.518325  346554 cri.go:89] found id: ""
	I1002 07:24:04.518334  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:04.518388  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.522223  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.526427  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:04.526558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:04.558950  346554 cri.go:89] found id: ""
	I1002 07:24:04.558987  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.558996  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:04.559003  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:04.559153  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:04.586620  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:04.586645  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:04.586650  346554 cri.go:89] found id: ""
	I1002 07:24:04.586658  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:04.586737  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.590676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.594540  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:04.594644  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:04.621686  346554 cri.go:89] found id: ""
	I1002 07:24:04.621709  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.621719  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:04.621725  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:04.621781  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:04.649834  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:04.649855  346554 cri.go:89] found id: ""
	I1002 07:24:04.649863  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:04.649944  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.654335  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:04.654436  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:04.687143  346554 cri.go:89] found id: ""
	I1002 07:24:04.687166  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.687175  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:04.687184  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:04.687216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.715298  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:04.715329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:04.758402  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:04.758436  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:04.838751  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:04.838789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:04.870372  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:04.870403  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:04.984168  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:04.984207  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:04.999826  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:04.999858  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:05.088672  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:05.079342   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.080234   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082236   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082893   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.084684   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:05.079342   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.080234   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082236   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082893   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.084684   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:05.088696  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:05.088709  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:05.150024  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:05.150063  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:05.226780  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:05.226819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:05.255567  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:05.255605  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:07.791197  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:07.803594  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:07.803689  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:07.833077  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:07.833103  346554 cri.go:89] found id: ""
	I1002 07:24:07.833113  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:07.833214  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.837537  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:07.837661  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:07.866899  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:07.866926  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:07.866932  346554 cri.go:89] found id: ""
	I1002 07:24:07.866939  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:07.867000  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.870759  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.874593  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:07.874713  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:07.903524  346554 cri.go:89] found id: ""
	I1002 07:24:07.903587  346554 logs.go:282] 0 containers: []
	W1002 07:24:07.903620  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:07.903644  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:07.903738  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:07.934472  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:07.934547  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:07.934567  346554 cri.go:89] found id: ""
	I1002 07:24:07.934593  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:07.934688  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.938660  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.942349  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:07.942453  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:07.969924  346554 cri.go:89] found id: ""
	I1002 07:24:07.969947  346554 logs.go:282] 0 containers: []
	W1002 07:24:07.969956  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:07.969964  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:07.970022  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:07.998801  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:07.998826  346554 cri.go:89] found id: ""
	I1002 07:24:07.998834  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:07.998890  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:08.006051  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:08.006218  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:08.043683  346554 cri.go:89] found id: ""
	I1002 07:24:08.043712  346554 logs.go:282] 0 containers: []
	W1002 07:24:08.043723  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:08.043733  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:08.043746  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:08.094506  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:08.094546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:08.175873  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:08.175912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:08.208161  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:08.208191  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:08.234954  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:08.234983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:08.301287  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:08.301325  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:08.377087  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:08.377123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:08.405378  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:08.405407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:08.431355  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:08.431386  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:08.536433  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:08.536479  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:08.553542  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:08.553575  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:08.621305  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:08.613680   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.614222   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.615692   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.616097   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.617557   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:08.613680   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.614222   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.615692   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.616097   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.617557   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:11.122975  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:11.135150  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:11.135231  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:11.168608  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:11.168633  346554 cri.go:89] found id: ""
	I1002 07:24:11.168642  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:11.168704  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.172810  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:11.172893  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:11.204325  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:11.204401  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:11.204413  346554 cri.go:89] found id: ""
	I1002 07:24:11.204422  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:11.204491  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.208514  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.212208  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:11.212287  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:11.245698  346554 cri.go:89] found id: ""
	I1002 07:24:11.245725  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.245736  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:11.245743  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:11.245805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:11.274196  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:11.274219  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:11.274224  346554 cri.go:89] found id: ""
	I1002 07:24:11.274231  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:11.274292  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.278411  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.282735  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:11.282813  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:11.322108  346554 cri.go:89] found id: ""
	I1002 07:24:11.322129  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.322138  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:11.322144  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:11.322203  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:11.350582  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:11.350647  346554 cri.go:89] found id: ""
	I1002 07:24:11.350659  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:11.350715  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.354559  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:11.354628  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:11.386834  346554 cri.go:89] found id: ""
	I1002 07:24:11.386899  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.386923  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:11.386951  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:11.386981  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:11.465595  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:11.465632  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:11.541894  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:11.541933  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:11.619365  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:11.619408  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:11.647305  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:11.647336  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:11.686923  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:11.686952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:11.792344  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:11.792440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:11.814593  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:11.814623  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:11.895211  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:11.886121   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.886872   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.888767   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.889333   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.890295   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:11.886121   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.886872   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.888767   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.889333   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.890295   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:11.895236  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:11.895250  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:11.921556  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:11.921586  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:11.957833  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:11.957872  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:14.490490  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:14.502377  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:14.502482  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:14.534162  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:14.534185  346554 cri.go:89] found id: ""
	I1002 07:24:14.534205  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:14.534262  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.538631  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:14.538701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:14.568427  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:14.568450  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:14.568456  346554 cri.go:89] found id: ""
	I1002 07:24:14.568463  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:14.568527  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.572917  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.576683  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:14.576760  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:14.604778  346554 cri.go:89] found id: ""
	I1002 07:24:14.604809  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.604819  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:14.604825  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:14.604932  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:14.631788  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:14.631812  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:14.631817  346554 cri.go:89] found id: ""
	I1002 07:24:14.631824  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:14.631887  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.635951  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.639653  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:14.639769  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:14.682797  346554 cri.go:89] found id: ""
	I1002 07:24:14.682823  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.682832  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:14.682839  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:14.682899  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:14.722146  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:14.722175  346554 cri.go:89] found id: ""
	I1002 07:24:14.722183  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:14.722239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.727035  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:14.727164  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:14.759413  346554 cri.go:89] found id: ""
	I1002 07:24:14.759438  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.759447  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:14.759458  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:14.759470  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:14.786929  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:14.787000  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:14.853005  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:14.853042  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:14.899040  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:14.899071  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:15.004708  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:15.004742  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:15.123051  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:15.123106  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:15.154325  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:15.154357  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:15.183161  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:15.183248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:15.265975  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:15.266013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:15.299575  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:15.299607  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:15.315427  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:15.315454  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:15.394115  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:15.385425   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.386315   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388134   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388810   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.390355   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:15.385425   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.386315   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388134   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388810   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.390355   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:17.895569  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:17.909876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:17.909985  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:17.941059  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:17.941083  346554 cri.go:89] found id: ""
	I1002 07:24:17.941092  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:17.941159  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.945318  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:17.945401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:17.973722  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:17.973743  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:17.973747  346554 cri.go:89] found id: ""
	I1002 07:24:17.973755  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:17.973813  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.978340  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.983135  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:17.983214  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:18.024398  346554 cri.go:89] found id: ""
	I1002 07:24:18.024424  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.024433  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:18.024440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:18.024518  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:18.053513  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:18.053535  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:18.053540  346554 cri.go:89] found id: ""
	I1002 07:24:18.053548  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:18.053631  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.057706  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.061744  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:18.061820  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:18.093847  346554 cri.go:89] found id: ""
	I1002 07:24:18.093873  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.093884  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:18.093891  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:18.093956  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:18.123256  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:18.123283  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:18.123289  346554 cri.go:89] found id: ""
	I1002 07:24:18.123296  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:18.123355  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.127263  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.131206  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:18.131284  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:18.157688  346554 cri.go:89] found id: ""
	I1002 07:24:18.157714  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.157724  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:18.157733  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:18.157745  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:18.203920  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:18.203946  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:18.220036  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:18.220064  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:18.288859  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:18.281281   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.282404   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283332   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283985   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.285062   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:18.281281   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.282404   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283332   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283985   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.285062   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:18.288885  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:18.288898  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:18.326029  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:18.326064  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:18.410880  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:18.410919  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:18.516955  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:18.516994  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:18.548753  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:18.548786  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:18.613812  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:18.613849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:18.643416  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:18.643444  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:18.670170  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:18.670199  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:18.699194  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:18.699231  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:21.274356  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:21.285713  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:21.285785  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:21.312389  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:21.312413  346554 cri.go:89] found id: ""
	I1002 07:24:21.312427  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:21.312492  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.316212  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:21.316290  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:21.341368  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:21.341390  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:21.341396  346554 cri.go:89] found id: ""
	I1002 07:24:21.341403  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:21.341458  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.345157  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.348764  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:21.348841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:21.381263  346554 cri.go:89] found id: ""
	I1002 07:24:21.381292  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.381302  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:21.381308  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:21.381366  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:21.412001  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:21.412022  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:21.412027  346554 cri.go:89] found id: ""
	I1002 07:24:21.412035  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:21.412092  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.415991  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.419745  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:21.419818  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:21.448790  346554 cri.go:89] found id: ""
	I1002 07:24:21.448817  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.448826  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:21.448832  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:21.448894  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:21.476863  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:21.476885  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:21.476890  346554 cri.go:89] found id: ""
	I1002 07:24:21.476897  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:21.476995  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.481180  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.484939  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:21.485015  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:21.518979  346554 cri.go:89] found id: ""
	I1002 07:24:21.519005  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.519014  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:21.519023  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:21.519035  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:21.548837  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:21.548868  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:21.577649  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:21.577678  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:21.614505  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:21.614538  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:21.648602  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:21.648630  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:21.730478  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:21.730515  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:21.770385  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:21.770420  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:21.869953  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:21.869990  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:21.890825  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:21.890864  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:21.963492  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:21.954886   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.955596   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957198   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957744   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.959330   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:21.954886   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.955596   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957198   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957744   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.959330   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:21.963514  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:21.963531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:21.990531  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:21.990559  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:22.069923  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:22.070005  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:24.652448  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:24.663850  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:24.663928  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:24.691270  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:24.691349  346554 cri.go:89] found id: ""
	I1002 07:24:24.691385  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:24.691483  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.695776  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:24.695846  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:24.722540  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:24.722563  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:24.722568  346554 cri.go:89] found id: ""
	I1002 07:24:24.722575  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:24.722641  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.726529  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.730111  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:24.730184  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:24.760973  346554 cri.go:89] found id: ""
	I1002 07:24:24.760999  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.761009  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:24.761015  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:24.761096  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:24.788682  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:24.788702  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:24.788707  346554 cri.go:89] found id: ""
	I1002 07:24:24.788714  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:24.788771  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.795284  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.800831  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:24.800927  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:24.826399  346554 cri.go:89] found id: ""
	I1002 07:24:24.826434  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.826443  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:24.826464  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:24.826550  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:24.854301  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:24.854328  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:24.854334  346554 cri.go:89] found id: ""
	I1002 07:24:24.854341  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:24.854423  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.858547  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.862285  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:24.862407  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:24.892024  346554 cri.go:89] found id: ""
	I1002 07:24:24.892048  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.892057  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:24.892067  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:24.892079  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:24.993633  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:24.993672  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:25.023967  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:25.023999  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:25.088069  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:25.088104  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:25.171716  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:25.171754  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:25.211296  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:25.211330  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:25.277865  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:25.269711   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.270447   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272032   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272563   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.274098   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:25.269711   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.270447   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272032   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272563   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.274098   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:25.277888  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:25.277901  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:25.305336  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:25.305363  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:25.339149  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:25.339311  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:25.419370  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:25.419407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:25.452415  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:25.452447  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:25.482792  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:25.482824  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:28.019833  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:28.031976  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:28.032047  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:28.061518  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:28.061538  346554 cri.go:89] found id: ""
	I1002 07:24:28.061547  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:28.061610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.065737  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:28.065812  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:28.100250  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:28.100274  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:28.100280  346554 cri.go:89] found id: ""
	I1002 07:24:28.100287  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:28.100347  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.104729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.109130  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:28.109242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:28.136194  346554 cri.go:89] found id: ""
	I1002 07:24:28.136220  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.136229  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:28.136235  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:28.136294  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:28.177728  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:28.177751  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:28.177756  346554 cri.go:89] found id: ""
	I1002 07:24:28.177764  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:28.177822  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.182057  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.185909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:28.185984  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:28.213081  346554 cri.go:89] found id: ""
	I1002 07:24:28.213104  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.213114  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:28.213120  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:28.213180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:28.242037  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:28.242061  346554 cri.go:89] found id: ""
	I1002 07:24:28.242070  346554 logs.go:282] 1 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd]
	I1002 07:24:28.242125  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.245909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:28.245982  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:28.272643  346554 cri.go:89] found id: ""
	I1002 07:24:28.272688  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.272698  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:28.272708  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:28.272741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:28.368590  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:28.368674  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:28.441922  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:28.433374   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.434538   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.435818   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.436626   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.438305   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:28.433374   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.434538   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.435818   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.436626   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.438305   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:28.441993  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:28.442025  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:28.485137  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:28.485174  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:28.519916  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:28.519949  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:28.547334  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:28.547364  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:28.578668  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:28.578698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:28.597024  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:28.597053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:28.625533  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:28.625562  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:28.703945  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:28.703983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:28.782221  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:28.782256  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:31.363217  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:31.375576  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:31.375651  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:31.412392  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:31.412416  346554 cri.go:89] found id: ""
	I1002 07:24:31.412425  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:31.412489  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.416397  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:31.416497  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:31.447142  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:31.447172  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:31.447178  346554 cri.go:89] found id: ""
	I1002 07:24:31.447186  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:31.447245  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.451130  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.454872  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:31.454972  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:31.491372  346554 cri.go:89] found id: ""
	I1002 07:24:31.491393  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.491401  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:31.491407  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:31.491464  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:31.523581  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:31.523606  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:31.523611  346554 cri.go:89] found id: ""
	I1002 07:24:31.523618  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:31.523696  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.527714  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.531521  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:31.531638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:31.557016  346554 cri.go:89] found id: ""
	I1002 07:24:31.557090  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.557110  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:31.557117  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:31.557180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:31.587792  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:31.587815  346554 cri.go:89] found id: ""
	I1002 07:24:31.587824  346554 logs.go:282] 1 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd]
	I1002 07:24:31.587900  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.591474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:31.591544  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:31.621938  346554 cri.go:89] found id: ""
	I1002 07:24:31.622002  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.622025  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:31.622057  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:31.622087  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:31.699830  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:31.699940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:31.731270  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:31.731297  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:31.830036  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:31.830073  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:31.849448  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:31.849489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:31.887973  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:31.888002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:31.925845  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:31.925879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:31.955314  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:31.955344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:32.027448  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:32.017106   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.018245   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.019008   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.021153   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.022262   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:32.017106   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.018245   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.019008   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.021153   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.022262   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:32.027527  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:32.027556  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:32.097086  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:32.097123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:32.181841  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:32.181877  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:34.710633  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:34.725897  346554 out.go:203] 
	W1002 07:24:34.728826  346554 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1002 07:24:34.728867  346554 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1002 07:24:34.728877  346554 out.go:285] * Related issues:
	W1002 07:24:34.728892  346554 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1002 07:24:34.728908  346554 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1002 07:24:34.732168  346554 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 07:19:49 ha-550225 crio[619]: time="2025-10-02T07:19:49.845674437Z" level=info msg="Started container" PID=1394 containerID=3269c04f5498e2befbc42b6cf2cdbe83a291623d3fde767dc07389c7422afd48 description=kube-system/coredns-66bc5c9577-s6dq8/coredns id=566bb378-7524-4452-b1e6-a25280ba5d7d name=/runtime.v1.RuntimeService/StartContainer sandboxID=e055873f04c2899609f0c3b597c607526b01fd136aa0e5f79f2676a446255f13
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.208804519Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.215218136Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.215264529Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.215287667Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.22352303Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.223562538Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.223586029Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.23080621Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.230844857Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.230864434Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.236373132Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.236409153Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:20:15 ha-550225 conmon[1183]: conmon 48fccb25ba33b3850afc <ninfo>: container 1186 exited with status 1
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.461105809Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5008df2b-58c5-42b1-a1f6-e14a10f1abbb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.46213329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b8ddfc43-aba7-4f99-b91d-97240f3eaf35 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.46331964Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=55bd6811-47fe-4715-9579-6244ca41dc93 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.463596057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.472956017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.47327584Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6958a022ca5d2e537c24f18da644191de8f0c379072dbf05004476abea1680e8/merged/etc/passwd: no such file or directory"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.473326269Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6958a022ca5d2e537c24f18da644191de8f0c379072dbf05004476abea1680e8/merged/etc/group: no such file or directory"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.473692689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.493904849Z" level=info msg="Created container 5b2624a029b4c010b76ac52edd332193351ee65c37100ef8fbe63d85d02c3e71: kube-system/storage-provisioner/storage-provisioner" id=55bd6811-47fe-4715-9579-6244ca41dc93 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.495150407Z" level=info msg="Starting container: 5b2624a029b4c010b76ac52edd332193351ee65c37100ef8fbe63d85d02c3e71" id=b45832b0-a0c9-4ad1-8a10-5fba7e2ccb21 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.499183546Z" level=info msg="Started container" PID=1457 containerID=5b2624a029b4c010b76ac52edd332193351ee65c37100ef8fbe63d85d02c3e71 description=kube-system/storage-provisioner/storage-provisioner id=b45832b0-a0c9-4ad1-8a10-5fba7e2ccb21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc2b31ede15861c2d07fce3991053334dcdd31f17b14021784ac1be8ed7e0b31
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	5b2624a029b4c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Running             storage-provisioner       2                   bc2b31ede1586       storage-provisioner                 kube-system
	3269c04f5498e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   1                   e055873f04c28       coredns-66bc5c9577-s6dq8            kube-system
	448d4967d9024       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   4 minutes ago       Running             busybox                   1                   e934129b46d08       busybox-7b57f96db7-gph4b            default
	8a9ee715e4343       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 minutes ago       Running             kindnet-cni               1                   edd2550dab874       kindnet-v7wnc                       kube-system
	5051222f30f0a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 minutes ago       Running             kube-proxy                1                   3e269f3dd585c       kube-proxy-skqs2                    kube-system
	48fccb25ba33b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Exited              storage-provisioner       1                   bc2b31ede1586       storage-provisioner                 kube-system
	97a0ea46cf7f7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   70fe4e27581bb       coredns-66bc5c9577-7gnh8            kube-system
	0dcd791f01f43       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   5 minutes ago       Running             kube-controller-manager   11                  19a2185d4a1eb       kube-controller-manager-ha-550225   kube-system
	8290015e8c15e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   5 minutes ago       Running             kube-apiserver            10                  b2181fe55e225       kube-apiserver-ha-550225            kube-system
	29394f92b6a36       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   10                  19a2185d4a1eb       kube-controller-manager-ha-550225   kube-system
	5b0c0535da780       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Exited              kube-apiserver            9                   b2181fe55e225       kube-apiserver-ha-550225            kube-system
	5f7223d3b4009       27aa99ef07bb63db109cae7189f6029203a1ba86e8d201ca72eb836e3cdd0b43   7 minutes ago       Running             kube-vip                  1                   c455a5f1f2468       kube-vip-ha-550225                  kube-system
	43f493b22d959       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Running             etcd                      3                   8c156781bf4ef       etcd-ha-550225                      kube-system
	2b4cd729501f6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            2                   b0329f645e59c       kube-scheduler-ha-550225            kube-system
	
	
	==> coredns [3269c04f5498e2befbc42b6cf2cdbe83a291623d3fde767dc07389c7422afd48] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50597 - 50866 "HINFO IN 2471821353559588233.5453610813505731232. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027203243s
	
	
	==> coredns [97a0ea46cf7f751b62a77918089760dd2e292198c9c2fc951fc282e4636ba492] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56369 - 30635 "HINFO IN 7137530019898463004.8479900960678889237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 2.018878387s
	[INFO] 127.0.0.1:38056 - 50955 "HINFO IN 7137530019898463004.8479900960678889237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041678969s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-550225
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_03_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:02:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:24:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:02:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:02:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:02:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:03:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-550225
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 804fc56d691a47babcd58cd3553282d3
	  System UUID:                96b9796d-f076-4bf0-ac0e-2eccc9d5873e
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-gph4b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-66bc5c9577-7gnh8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     21m
	  kube-system                 coredns-66bc5c9577-s6dq8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     21m
	  kube-system                 etcd-ha-550225                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         21m
	  kube-system                 kindnet-v7wnc                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-apiserver-ha-550225             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-550225    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-skqs2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-550225             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-550225                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 21m                    kube-proxy       
	  Normal   Starting                 4m57s                  kube-proxy       
	  Normal   Starting                 21m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 21m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  21m (x8 over 21m)      kubelet          Node ha-550225 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     21m (x8 over 21m)      kubelet          Node ha-550225 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)      kubelet          Node ha-550225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasNoDiskPressure    21m                    kubelet          Node ha-550225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m                    kubelet          Node ha-550225 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  21m                    kubelet          Node ha-550225 status is now: NodeHasSufficientMemory
	  Normal   Starting                 21m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 21m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           21m                    node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   RegisteredNode           21m                    node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   NodeReady                21m                    kubelet          Node ha-550225 status is now: NodeReady
	  Normal   RegisteredNode           19m                    node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   RegisteredNode           16m                    node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   Starting                 7m52s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m52s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m52s (x8 over 7m52s)  kubelet          Node ha-550225 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m52s (x8 over 7m52s)  kubelet          Node ha-550225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m52s (x8 over 7m52s)  kubelet          Node ha-550225 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m43s                  node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	
	
	Name:               ha-550225-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_02T07_03_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:03:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:08:21 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-550225-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 08dcc5805aac4edbab34bc4710db5eef
	  System UUID:                c6a05e31-956b-4e2f-af6e-62090982b7b4
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wbl7l                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-550225-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         21m
	  kube-system                 kindnet-n6kwf                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-apiserver-ha-550225-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-550225-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-jkkmq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-550225-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-550225-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   RegisteredNode           21m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   RegisteredNode           21m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   RegisteredNode           19m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-550225-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-550225-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x8 over 17m)  kubelet          Node ha-550225-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   RegisteredNode           5m43s              node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   NodeNotReady             4m53s              node-controller  Node ha-550225-m02 status is now: NodeNotReady
	
	
	Name:               ha-550225-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_02T07_04_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:04:57 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:08:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-550225-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 315218fdc78646b99ded6becf46edf67
	  System UUID:                4ea95856-3488-4a4f-b299-e71342dd8d89
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-q95k5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-550225-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-2w4k5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-ha-550225-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-550225-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-2k945                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-550225-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-550225-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        19m    kube-proxy       
	  Normal  RegisteredNode  19m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  19m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  19m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  16m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  5m43s  node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  NodeNotReady    4m53s  node-controller  Node ha-550225-m03 status is now: NodeNotReady
	
	
	Name:               ha-550225-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_02T07_06_15_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:06:14 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:08:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-550225-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 4bfee30c7b434881a054adc06b7ffd73
	  System UUID:                9c87cedb-25ad-496a-a907-0c95201b1fe7
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2h5qc       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-proxy-gf52r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  RegisteredNode           18m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  NodeHasSufficientMemory  18m (x4 over 18m)  kubelet          Node ha-550225-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x4 over 18m)  kubelet          Node ha-550225-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x4 over 18m)  kubelet          Node ha-550225-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  NodeReady                17m                kubelet          Node ha-550225-m04 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  RegisteredNode           5m43s              node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  NodeNotReady             4m53s              node-controller  Node ha-550225-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct 2 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014797] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531434] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.039899] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.787301] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.571073] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 2 05:52] hrtimer: interrupt took 24222969 ns
	[Oct 2 06:40] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:42] overlayfs: idmapped layers are currently not supported
	[  +0.072713] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 06:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 06:49] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:03] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:06] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:07] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:08] overlayfs: idmapped layers are currently not supported
	[  +3.056037] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:16] overlayfs: idmapped layers are currently not supported
	[  +2.690454] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [43f493b22d959eb4018498d0af4c8a03328857db3567f13cb0ffaee9ec06c00b] <==
	{"level":"warn","ts":"2025-10-02T07:24:43.454494Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.457490Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.462044Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.472800Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.479202Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.483200Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.487375Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.495782Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.498891Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.502151Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.517513Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.518251Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.528137Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.534040Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.537612Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.563739Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.572013Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.578753Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.580305Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.583958Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.587691Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.591459Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.600945Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.610121Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:43.678785Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 07:24:43 up  2:07,  0 user,  load average: 1.49, 1.02, 1.15
	Linux ha-550225 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8a9ee715e43431e349cf8c9be623f1a296d01184f3204e6a4a0f8394fc70358e] <==
	I1002 07:24:08.213350       1 main.go:324] Node ha-550225-m02 has CIDR [10.244.1.0/24] 
	I1002 07:24:18.212188       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1002 07:24:18.212287       1 main.go:324] Node ha-550225-m04 has CIDR [10.244.3.0/24] 
	I1002 07:24:18.212500       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:18.212609       1 main.go:301] handling current node
	I1002 07:24:18.212650       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1002 07:24:18.212683       1 main.go:324] Node ha-550225-m02 has CIDR [10.244.1.0/24] 
	I1002 07:24:18.213031       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1002 07:24:18.215444       1 main.go:324] Node ha-550225-m03 has CIDR [10.244.2.0/24] 
	I1002 07:24:28.207379       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1002 07:24:28.207511       1 main.go:324] Node ha-550225-m02 has CIDR [10.244.1.0/24] 
	I1002 07:24:28.207747       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1002 07:24:28.207827       1 main.go:324] Node ha-550225-m03 has CIDR [10.244.2.0/24] 
	I1002 07:24:28.207968       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1002 07:24:28.208017       1 main.go:324] Node ha-550225-m04 has CIDR [10.244.3.0/24] 
	I1002 07:24:28.208188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:28.208240       1 main.go:301] handling current node
	I1002 07:24:38.211259       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:38.211291       1 main.go:301] handling current node
	I1002 07:24:38.211307       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1002 07:24:38.211313       1 main.go:324] Node ha-550225-m02 has CIDR [10.244.1.0/24] 
	I1002 07:24:38.211454       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1002 07:24:38.211461       1 main.go:324] Node ha-550225-m03 has CIDR [10.244.2.0/24] 
	I1002 07:24:38.211513       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1002 07:24:38.211519       1 main.go:324] Node ha-550225-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [5b0c0535da7807f278c4629073d71180fc43a369ddae7136c7ffd515a7e95c6b] <==
	I1002 07:18:00.892979       1 server.go:150] Version: v1.34.1
	I1002 07:18:00.893076       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1002 07:18:02.015138       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1002 07:18:02.015252       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1002 07:18:02.015284       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1002 07:18:02.015315       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1002 07:18:02.015348       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1002 07:18:02.015382       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1002 07:18:02.015415       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1002 07:18:02.015448       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1002 07:18:02.015481       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1002 07:18:02.015512       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1002 07:18:02.015544       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1002 07:18:02.015575       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1002 07:18:02.033014       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1002 07:18:02.034577       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1002 07:18:02.035335       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1002 07:18:02.045748       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 07:18:02.056978       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1002 07:18:02.057010       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1002 07:18:02.057337       1 instance.go:239] Using reconciler: lease
	W1002 07:18:02.058416       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1002 07:18:22.032470       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1002 07:18:22.034569       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1002 07:18:22.058050       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [8290015e8c15e01397448ee79ef46f66d0ddd62579c46b3fd334baf073a9d6bc] <==
	I1002 07:18:54.901508       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 07:18:54.914584       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 07:18:54.914862       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 07:18:54.917776       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:18:54.920456       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 07:18:54.921448       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 07:18:54.921690       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 07:18:54.935006       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 07:18:54.935120       1 policy_source.go:240] refreshing policies
	I1002 07:18:54.936177       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:18:54.995047       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 07:18:54.995073       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 07:18:55.006144       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1002 07:18:55.006401       1 aggregator.go:171] initial CRD sync complete...
	I1002 07:18:55.006443       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 07:18:55.006472       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 07:18:55.006502       1 cache.go:39] Caches are synced for autoregister controller
	I1002 07:18:55.693729       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:18:55.915859       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1002 07:18:56.852268       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 07:18:56.854341       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:18:56.866097       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:19:00.445840       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 07:19:00.449414       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 07:19:00.588914       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [0dcd791f01f43325da7d666b2308b7e9e8afd6c81f0dce7b635d6b6e5e8a9df1] <==
	I1002 07:19:00.416685       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 07:19:00.422763       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:19:00.422858       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 07:19:00.422891       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 07:19:00.429174       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 07:19:00.430239       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 07:19:00.434548       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 07:19:00.434793       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 07:19:00.434939       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:19:00.434988       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 07:19:00.435000       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 07:19:00.435011       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 07:19:00.435027       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 07:19:00.436974       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 07:19:00.437153       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 07:19:00.437213       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 07:19:00.437246       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 07:19:00.437276       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 07:19:00.440308       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:19:00.441271       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 07:19:00.447203       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 07:19:00.447327       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 07:19:00.447774       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-550225-m04"
	I1002 07:19:50.432665       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-550225-m04"
	I1002 07:19:50.870389       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	
	
	==> kube-controller-manager [29394f92b6a368bb1845ecb24b6cebce9a3e6e6816e60bf240997292037f264a] <==
	I1002 07:18:16.059120       1 serving.go:386] Generated self-signed cert in-memory
	I1002 07:18:17.185952       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1002 07:18:17.185981       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:18:17.187402       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 07:18:17.187586       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 07:18:17.187839       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1002 07:18:17.187927       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 07:18:33.066017       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-proxy [5051222f30f0ae589e47ad3f24adc858d48fe99da320fc5495aa8189ecc36596] <==
	I1002 07:19:45.951789       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:19:46.028809       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:19:46.129896       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:19:46.129933       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 07:19:46.130000       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:19:46.150308       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:19:46.150378       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:19:46.154018       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:19:46.154343       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:19:46.154416       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:19:46.157478       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:19:46.157553       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:19:46.157874       1 config.go:200] "Starting service config controller"
	I1002 07:19:46.157918       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:19:46.158250       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:19:46.158295       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:19:46.158742       1 config.go:309] "Starting node config controller"
	I1002 07:19:46.158794       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:19:46.158824       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:19:46.258046       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:19:46.258051       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 07:19:46.258406       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2b4cd729501f68e709fb29b74cdf4d89db019e465f669755a276bbd13dfa365d] <==
	E1002 07:17:57.915557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:17:59.343245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:18:17.475604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:18:19.476430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 07:18:20.523426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:18:20.961075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:18:21.209835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:18:22.175039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:18:23.065717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33332->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 07:18:23.065828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33338->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:18:23.065904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33346->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:18:23.066085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33356->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:18:23.066195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48896->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:18:23.066285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33302->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:18:23.066377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33316->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:18:23.066451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33400->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:18:23.067303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33366->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:18:23.067355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48888->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:18:23.067419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48872->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:18:23.067516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48892->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 07:18:23.067591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33382->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:18:50.334725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:18:54.767637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:18:54.767804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1002 07:18:55.890008       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:19:21 ha-550225 kubelet[753]: E1002 07:19:21.811346     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(f74a25ae-35bd-44b0-84a9-50a5df5dec1d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:21 ha-550225 kubelet[753]: E1002 07:19:21.811400     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="f74a25ae-35bd-44b0-84a9-50a5df5dec1d"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.810797     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-gph4b_default(193a390b-ce6f-4e39-afcc-7ee671deb0a1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.810843     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-gph4b" podUID="193a390b-ce6f-4e39-afcc-7ee671deb0a1"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.811359     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-s6dq8_kube-system(7626557b-e8fe-419b-b447-994cfa9b0f07): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.811895     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-s6dq8" podUID="7626557b-e8fe-419b-b447-994cfa9b0f07"
	Oct 02 07:19:23 ha-550225 kubelet[753]: E1002 07:19:23.811789     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-v7wnc_kube-system(b011ceef-f3c8-4142-8385-b09113581770): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:23 ha-550225 kubelet[753]: E1002 07:19:23.811826     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-v7wnc" podUID="b011ceef-f3c8-4142-8385-b09113581770"
	Oct 02 07:19:24 ha-550225 kubelet[753]: E1002 07:19:24.810191     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-7gnh8_kube-system(55461d93-6678-4e2e-8b48-7d26628c1cf9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:24 ha-550225 kubelet[753]: E1002 07:19:24.810240     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-7gnh8" podUID="55461d93-6678-4e2e-8b48-7d26628c1cf9"
	Oct 02 07:19:31 ha-550225 kubelet[753]: E1002 07:19:31.812684     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-skqs2_kube-system(d5f2a06e-009a-4c94-aee4-c6d515d1a38b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:31 ha-550225 kubelet[753]: E1002 07:19:31.812750     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-skqs2" podUID="d5f2a06e-009a-4c94-aee4-c6d515d1a38b"
	Oct 02 07:19:32 ha-550225 kubelet[753]: E1002 07:19:32.810908     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(f74a25ae-35bd-44b0-84a9-50a5df5dec1d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:32 ha-550225 kubelet[753]: E1002 07:19:32.811030     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="f74a25ae-35bd-44b0-84a9-50a5df5dec1d"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812380     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-s6dq8_kube-system(7626557b-e8fe-419b-b447-994cfa9b0f07): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812427     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-s6dq8" podUID="7626557b-e8fe-419b-b447-994cfa9b0f07"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812402     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-gph4b_default(193a390b-ce6f-4e39-afcc-7ee671deb0a1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812917     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-v7wnc_kube-system(b011ceef-f3c8-4142-8385-b09113581770): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.814141     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-v7wnc" podUID="b011ceef-f3c8-4142-8385-b09113581770"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.814168     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-gph4b" podUID="193a390b-ce6f-4e39-afcc-7ee671deb0a1"
	Oct 02 07:19:51 ha-550225 kubelet[753]: E1002 07:19:51.724599     753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d\": container with ID starting with 15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d not found: ID does not exist" containerID="15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d"
	Oct 02 07:19:51 ha-550225 kubelet[753]: I1002 07:19:51.724702     753 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d" err="rpc error: code = NotFound desc = could not find container \"15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d\": container with ID starting with 15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d not found: ID does not exist"
	Oct 02 07:19:51 ha-550225 kubelet[753]: E1002 07:19:51.725359     753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04\": container with ID starting with c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04 not found: ID does not exist" containerID="c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04"
	Oct 02 07:19:51 ha-550225 kubelet[753]: I1002 07:19:51.725398     753 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04" err="rpc error: code = NotFound desc = could not find container \"c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04\": container with ID starting with c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04 not found: ID does not exist"
	Oct 02 07:20:16 ha-550225 kubelet[753]: I1002 07:20:16.460466     753 scope.go:117] "RemoveContainer" containerID="48fccb25ba33b3850afc1ffdf5ca13f71673b1d992497dbcadf93bdbc8bdee4c"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-550225 -n ha-550225
helpers_test.go:269: (dbg) Run:  kubectl --context ha-550225 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (5.35s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (4.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-550225 node add --control-plane --alsologtostderr -v 5: exit status 83 (179.2206ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-550225-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-550225"

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:24:46.210925  363453 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:24:46.211126  363453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:24:46.211160  363453 out.go:374] Setting ErrFile to fd 2...
	I1002 07:24:46.211182  363453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:24:46.211450  363453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:24:46.211819  363453 mustload.go:65] Loading cluster: ha-550225
	I1002 07:24:46.212288  363453 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:24:46.213017  363453 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:24:46.230594  363453 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:24:46.231163  363453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:24:46.288995  363453 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 07:24:46.279283812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:24:46.289386  363453 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:24:46.306662  363453 host.go:66] Checking if "ha-550225-m02" exists ...
	I1002 07:24:46.307258  363453 cli_runner.go:164] Run: docker container inspect ha-550225-m03 --format={{.State.Status}}
	I1002 07:24:46.327155  363453 out.go:179] * The control-plane node ha-550225-m03 host is not running: state=Stopped
	I1002 07:24:46.330121  363453 out.go:179]   To start a cluster, run: "minikube start -p ha-550225"

                                                
                                                
** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-arm64 -p ha-550225 node add --control-plane --alsologtostderr -v 5" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-550225
helpers_test.go:243: (dbg) docker inspect ha-550225:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	        "Created": "2025-10-02T07:02:30.539981852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 346684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:16:43.830280649Z",
	            "FinishedAt": "2025-10-02T07:16:42.559270036Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hosts",
	        "LogPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c-json.log",
	        "Name": "/ha-550225",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-550225:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-550225",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	                "LowerDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-550225",
	                "Source": "/var/lib/docker/volumes/ha-550225/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-550225",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-550225",
	                "name.minikube.sigs.k8s.io": "ha-550225",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afa0a4e6ee5917c0a800a9abfad94a173555b01d2438c9506474ee7c27ad6564",
	            "SandboxKey": "/var/run/docker/netns/afa0a4e6ee59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-550225": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:f4:60:b8:9c:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "87a294cab4b5d50d5f227902c62678f378fbede9275f1d54f0b3de7a1f36e1a0",
	                    "EndpointID": "e0227cbf31cf607a461ab665f3bdb5d5d554f27df511a468e38aecbd366c38c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-550225",
	                        "1c1f8ec53310"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-550225 -n ha-550225
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-550225 logs -n 25: (2.224377935s)
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test_ha-550225-m03_ha-550225-m04.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp testdata/cp-test.txt ha-550225-m04:/home/docker/cp-test.txt                                                             │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216719830/001/cp-test_ha-550225-m04.txt │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225:/home/docker/cp-test_ha-550225-m04_ha-550225.txt                       │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225.txt                                                 │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m02:/home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m02 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m03:/home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ node    │ ha-550225 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ node    │ ha-550225 node start m02 --alsologtostderr -v 5                                                                                      │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:08 UTC │
	│ node    │ ha-550225 node list --alsologtostderr -v 5                                                                                           │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │                     │
	│ stop    │ ha-550225 stop --alsologtostderr -v 5                                                                                                │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │ 02 Oct 25 07:08 UTC │
	│ start   │ ha-550225 start --wait true --alsologtostderr -v 5                                                                                   │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │                     │
	│ node    │ ha-550225 node list --alsologtostderr -v 5                                                                                           │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	│ node    │ ha-550225 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	│ stop    │ ha-550225 stop --alsologtostderr -v 5                                                                                                │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │ 02 Oct 25 07:16 UTC │
	│ start   │ ha-550225 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	│ node    │ ha-550225 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:24 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:16:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:16:43.556654  346554 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:16:43.556900  346554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:43.556935  346554 out.go:374] Setting ErrFile to fd 2...
	I1002 07:16:43.556957  346554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:43.557253  346554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:16:43.557663  346554 out.go:368] Setting JSON to false
	I1002 07:16:43.558546  346554 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7155,"bootTime":1759382249,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:16:43.558645  346554 start.go:140] virtualization:  
	I1002 07:16:43.562097  346554 out.go:179] * [ha-550225] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:16:43.565995  346554 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:16:43.566065  346554 notify.go:220] Checking for updates...
	I1002 07:16:43.572511  346554 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:16:43.575317  346554 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:43.578176  346554 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:16:43.580964  346554 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:16:43.583787  346554 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:16:43.587186  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:43.587749  346554 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:16:43.619258  346554 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:16:43.619425  346554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:16:43.676323  346554 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:16:43.665454213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:16:43.676450  346554 docker.go:318] overlay module found
	I1002 07:16:43.679463  346554 out.go:179] * Using the docker driver based on existing profile
	I1002 07:16:43.682328  346554 start.go:304] selected driver: docker
	I1002 07:16:43.682357  346554 start.go:924] validating driver "docker" against &{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:16:43.682550  346554 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:16:43.682661  346554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:16:43.739766  346554 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:16:43.730208669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:16:43.740206  346554 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:16:43.740241  346554 cni.go:84] Creating CNI manager for ""
	I1002 07:16:43.740306  346554 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:16:43.740357  346554 start.go:348] cluster config:
	{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
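	The cluster config dumped above describes the HA topology under test: three control-plane nodes at 192.168.49.2-4, one worker (m04) at 192.168.49.5, a shared API-server VIP of 192.168.49.254, and CRI-O as the runtime. Once the restart completes, a quick illustrative cross-check from the host looks like the following; assuming the kubeconfig context matches the profile name, which this log does not show directly:
	    kubectl --context ha-550225 get nodes -o wide
	    docker ps --filter "name=ha-550225" --format "table {{.Names}}\t{{.Status}}"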
	I1002 07:16:43.743601  346554 out.go:179] * Starting "ha-550225" primary control-plane node in "ha-550225" cluster
	I1002 07:16:43.746399  346554 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:16:43.749341  346554 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:16:43.752288  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:43.752352  346554 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:16:43.752374  346554 cache.go:58] Caching tarball of preloaded images
	I1002 07:16:43.752377  346554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:16:43.752484  346554 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:16:43.752495  346554 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:16:43.752642  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:43.772750  346554 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:16:43.772775  346554 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:16:43.772803  346554 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:16:43.772827  346554 start.go:360] acquireMachinesLock for ha-550225: {Name:mkc1f009b4f35f6b87d580d72d0a621c44a033f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:16:43.772899  346554 start.go:364] duration metric: took 46.236µs to acquireMachinesLock for "ha-550225"
	I1002 07:16:43.772922  346554 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:16:43.772934  346554 fix.go:54] fixHost starting: 
	I1002 07:16:43.773187  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:16:43.794446  346554 fix.go:112] recreateIfNeeded on ha-550225: state=Stopped err=<nil>
	W1002 07:16:43.794478  346554 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:16:43.797824  346554 out.go:252] * Restarting existing docker container for "ha-550225" ...
	I1002 07:16:43.797912  346554 cli_runner.go:164] Run: docker start ha-550225
	I1002 07:16:44.052064  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:16:44.071577  346554 kic.go:430] container "ha-550225" state is running.
	I1002 07:16:44.071977  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:44.097000  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:44.097247  346554 machine.go:93] provisionDockerMachine start ...
	I1002 07:16:44.097316  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:44.119603  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:44.120087  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:44.120103  346554 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:16:44.120661  346554 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57572->127.0.0.1:33188: read: connection reset by peer
	I1002 07:16:47.250760  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:16:47.250786  346554 ubuntu.go:182] provisioning hostname "ha-550225"
	I1002 07:16:47.250888  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:47.268212  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:47.268525  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:47.268543  346554 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225 && echo "ha-550225" | sudo tee /etc/hostname
	I1002 07:16:47.408749  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:16:47.408837  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:47.428229  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:47.428559  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:47.428582  346554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:16:47.563394  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:16:47.563422  346554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:16:47.563445  346554 ubuntu.go:190] setting up certificates
	I1002 07:16:47.563480  346554 provision.go:84] configureAuth start
	I1002 07:16:47.563555  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:47.583742  346554 provision.go:143] copyHostCerts
	I1002 07:16:47.583804  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:47.583843  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:16:47.583865  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:47.583942  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:16:47.584044  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:47.584067  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:16:47.584076  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:47.584105  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:16:47.584165  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:47.584188  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:16:47.584197  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:47.584232  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:16:47.584294  346554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225 san=[127.0.0.1 192.168.49.2 ha-550225 localhost minikube]
	I1002 07:16:49.085710  346554 provision.go:177] copyRemoteCerts
	I1002 07:16:49.085804  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:16:49.085919  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.102600  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.203033  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:16:49.203111  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:16:49.220709  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:16:49.220773  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 07:16:49.238283  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:16:49.238380  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:16:49.255763  346554 provision.go:87] duration metric: took 1.692265184s to configureAuth
	I1002 07:16:49.255832  346554 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:16:49.256105  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:49.256221  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.273296  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:49.273613  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:49.273636  346554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:16:49.545258  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:16:49.545281  346554 machine.go:96] duration metric: took 5.448016594s to provisionDockerMachine
	I1002 07:16:49.545292  346554 start.go:293] postStartSetup for "ha-550225" (driver="docker")
	I1002 07:16:49.545335  346554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:16:49.545400  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:16:49.545448  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.562765  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.663440  346554 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:16:49.667012  346554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:16:49.667043  346554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:16:49.667055  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:16:49.667131  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:16:49.667227  346554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:16:49.667243  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:16:49.667356  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:16:49.675157  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:49.693566  346554 start.go:296] duration metric: took 148.259083ms for postStartSetup
	I1002 07:16:49.693674  346554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:16:49.693733  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.711628  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.808263  346554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:16:49.813222  346554 fix.go:56] duration metric: took 6.040285845s for fixHost
	I1002 07:16:49.813250  346554 start.go:83] releasing machines lock for "ha-550225", held for 6.040338171s
	I1002 07:16:49.813321  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:49.832086  346554 ssh_runner.go:195] Run: cat /version.json
	I1002 07:16:49.832138  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.832170  346554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:16:49.832223  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.860178  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.874339  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.958866  346554 ssh_runner.go:195] Run: systemctl --version
	I1002 07:16:50.049981  346554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:16:50.088401  346554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:16:50.093782  346554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:16:50.093888  346554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:16:50.102679  346554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:16:50.102707  346554 start.go:495] detecting cgroup driver to use...
	I1002 07:16:50.102739  346554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:16:50.102790  346554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:16:50.119025  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:16:50.132406  346554 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:16:50.132508  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:16:50.147702  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:16:50.161840  346554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:16:50.285662  346554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:16:50.412243  346554 docker.go:234] disabling docker service ...
	I1002 07:16:50.412358  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:16:50.429880  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:16:50.443435  346554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:16:50.570143  346554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:16:50.705200  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:16:50.718349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:16:50.732391  346554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:16:50.732489  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.741688  346554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:16:50.741842  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.751301  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.760089  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.769286  346554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:16:50.777484  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.786723  346554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.795606  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.804393  346554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:16:50.812287  346554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:16:50.819774  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:16:50.940841  346554 ssh_runner.go:195] Run: sudo systemctl restart crio
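	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image registry.k8s.io/pause:3.10.1, cgroup_manager "cgroupfs", conmon_cgroup "pod", and the net.ipv4.ip_unprivileged_port_start sysctl) before CRI-O is restarted. An illustrative way to confirm the rewritten values on the node, not part of the test run itself:
	    minikube ssh -p ha-550225 -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    minikube ssh -p ha-550225 -- sudo systemctl is-active crio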
	I1002 07:16:51.084825  346554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:16:51.084933  346554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:16:51.088952  346554 start.go:563] Will wait 60s for crictl version
	I1002 07:16:51.089022  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:16:51.093255  346554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:16:51.121871  346554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:16:51.122035  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:16:51.151306  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:16:51.186151  346554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:16:51.188993  346554 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:16:51.205719  346554 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:16:51.209600  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:16:51.219722  346554 kubeadm.go:883] updating cluster {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:16:51.219870  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:51.219932  346554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:16:51.259348  346554 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:16:51.259373  346554 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:16:51.259435  346554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:16:51.285823  346554 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:16:51.285850  346554 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:16:51.285860  346554 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:16:51.285975  346554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:16:51.286067  346554 ssh_runner.go:195] Run: crio config
	I1002 07:16:51.349840  346554 cni.go:84] Creating CNI manager for ""
	I1002 07:16:51.349864  346554 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:16:51.349907  346554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:16:51.349941  346554 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-550225 NodeName:ha-550225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:16:51.350123  346554 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-550225"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
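	
	The rendered kubeadm input above stacks four documents: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4), KubeletConfiguration, and KubeProxyConfiguration, with the control-plane endpoint pinned to control-plane.minikube.internal:8443 and kubelet eviction thresholds effectively disabled. It is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. An illustrative sanity check on the node; the kubeadm path under /var/lib/minikube/binaries is an assumption based on the binaries listing later in this log, and "kubeadm config validate" is only available in recent kubeadm releases:
	    minikube ssh -p ha-550225 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    minikube ssh -p ha-550225 -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	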
	
	I1002 07:16:51.350149  346554 kube-vip.go:115] generating kube-vip config ...
	I1002 07:16:51.350220  346554 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:16:51.362455  346554 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
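	Because "lsmod | grep ip_vs" found no IPVS modules in the node kernel, minikube skips kube-vip's IPVS-based control-plane load balancing and relies on ARP leader election alone (vip_arp and vip_leaderelection are "true" in the manifest below), so the VIP 192.168.49.254 is announced by whichever control-plane node holds the plndr-cp-lock lease. An illustrative way to see whether the module could be loaded on the host running these nodes, not something the test attempts:
	    sudo modprobe ip_vs && lsmod | grep '^ip_vs'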
	I1002 07:16:51.362590  346554 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
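	The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step just below), so the kubelet runs kube-vip as a static pod on each control-plane node and the current leader keeps 192.168.49.254:8443 answering for the whole cluster. Once the restart settles, an illustrative check, not performed by this test:
	    minikube ssh -p ha-550225 -- curl -sk https://192.168.49.254:8443/healthz
	    kubectl --context ha-550225 -n kube-system get pods -o wide | grep kube-vip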
	I1002 07:16:51.362683  346554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:16:51.370716  346554 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:16:51.370824  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 07:16:51.378562  346554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:16:51.392384  346554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:16:51.405890  346554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1002 07:16:51.418852  346554 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:16:51.431748  346554 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:16:51.435456  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:16:51.445200  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:16:51.564279  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:16:51.580309  346554 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.2
	I1002 07:16:51.580335  346554 certs.go:195] generating shared ca certs ...
	I1002 07:16:51.580352  346554 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:51.580577  346554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:16:51.580643  346554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:16:51.580658  346554 certs.go:257] generating profile certs ...
	I1002 07:16:51.580760  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:16:51.580851  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa
	I1002 07:16:51.580915  346554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:16:51.580931  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:16:51.580960  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:16:51.580981  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:16:51.581001  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:16:51.581029  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:16:51.581060  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:16:51.581082  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:16:51.581099  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:16:51.581172  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:16:51.581223  346554 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:16:51.581238  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:16:51.581269  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:16:51.581323  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:16:51.581355  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:16:51.581425  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:51.581476  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.581497  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.581511  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.582046  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:16:51.608528  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:16:51.630032  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:16:51.651693  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:16:51.672816  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:16:51.694334  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:16:51.713045  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:16:51.734929  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:16:51.759074  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:16:51.783798  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:16:51.810129  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:16:51.829572  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:16:51.844038  346554 ssh_runner.go:195] Run: openssl version
	I1002 07:16:51.850521  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:16:51.859107  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.863052  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.863200  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.905139  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:16:51.915686  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:16:51.924646  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.928631  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.928697  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.970474  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:16:51.979037  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:16:51.988282  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.992329  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.992400  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:16:52.034608  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:16:52.043437  346554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:16:52.047807  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:16:52.090171  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:16:52.132189  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:16:52.173672  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:16:52.215246  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:16:52.259493  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
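	Each openssl call above uses -checkend 86400, i.e. it only verifies that the certificate will still be valid 86400 seconds (24 hours) from now; it does not print the actual expiry. An equivalent manual look at one of the same files, for illustration only:
	    minikube ssh -p ha-550225 -- sudo openssl x509 -noout -enddate -subject -in /var/lib/minikube/certs/apiserver-kubelet-client.crt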
	I1002 07:16:52.303359  346554 kubeadm.go:400] StartCluster: {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:16:52.303541  346554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:16:52.303637  346554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:16:52.411948  346554 cri.go:89] found id: ""
	I1002 07:16:52.412087  346554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:16:52.423926  346554 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:16:52.423985  346554 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:16:52.424072  346554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:16:52.435971  346554 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:16:52.436519  346554 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-550225" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:52.436691  346554 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-292504/kubeconfig needs updating (will repair): [kubeconfig missing "ha-550225" cluster setting kubeconfig missing "ha-550225" context setting]
	I1002 07:16:52.436999  346554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:52.437624  346554 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:16:52.438178  346554 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:16:52.438372  346554 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:16:52.438396  346554 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:16:52.438439  346554 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:16:52.438479  346554 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:16:52.438242  346554 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:16:52.438946  346554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:16:52.453843  346554 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:16:52.453908  346554 kubeadm.go:601] duration metric: took 29.902711ms to restartPrimaryControlPlane
	I1002 07:16:52.454041  346554 kubeadm.go:402] duration metric: took 150.691034ms to StartCluster
	I1002 07:16:52.454081  346554 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:52.454172  346554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:52.454859  346554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:52.455192  346554 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:16:52.455245  346554 start.go:241] waiting for startup goroutines ...
	I1002 07:16:52.455279  346554 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:16:52.455778  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:52.480332  346554 out.go:179] * Enabled addons: 
	I1002 07:16:52.484238  346554 addons.go:514] duration metric: took 28.941955ms for enable addons: enabled=[]
	I1002 07:16:52.484336  346554 start.go:246] waiting for cluster config update ...
	I1002 07:16:52.484369  346554 start.go:255] writing updated cluster config ...
	I1002 07:16:52.488274  346554 out.go:203] 
	I1002 07:16:52.492458  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:52.492645  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:52.496127  346554 out.go:179] * Starting "ha-550225-m02" control-plane node in "ha-550225" cluster
	I1002 07:16:52.499195  346554 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:16:52.502435  346554 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:16:52.505497  346554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:16:52.505566  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:52.505677  346554 cache.go:58] Caching tarball of preloaded images
	I1002 07:16:52.505807  346554 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:16:52.505838  346554 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:16:52.506003  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:52.530361  346554 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:16:52.530380  346554 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:16:52.530392  346554 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:16:52.530415  346554 start.go:360] acquireMachinesLock for ha-550225-m02: {Name:mk11ef625bc214163cbeacdb736ddec4214a8374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:16:52.530475  346554 start.go:364] duration metric: took 37.3µs to acquireMachinesLock for "ha-550225-m02"
	I1002 07:16:52.530499  346554 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:16:52.530506  346554 fix.go:54] fixHost starting: m02
	I1002 07:16:52.530790  346554 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:16:52.559198  346554 fix.go:112] recreateIfNeeded on ha-550225-m02: state=Stopped err=<nil>
	W1002 07:16:52.559226  346554 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:16:52.563143  346554 out.go:252] * Restarting existing docker container for "ha-550225-m02" ...
	I1002 07:16:52.563247  346554 cli_runner.go:164] Run: docker start ha-550225-m02
	I1002 07:16:52.985736  346554 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:16:53.019972  346554 kic.go:430] container "ha-550225-m02" state is running.
	I1002 07:16:53.020350  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:53.045172  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:53.045437  346554 machine.go:93] provisionDockerMachine start ...
	I1002 07:16:53.045501  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:53.087166  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:53.087519  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:53.087528  346554 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:16:53.088138  346554 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45188->127.0.0.1:33193: read: connection reset by peer
	I1002 07:16:56.311713  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
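The first SSH attempt above fails with "connection reset by peer" because sshd inside the freshly restarted container is not yet accepting connections; the provisioner simply retries until the handshake succeeds and then runs `hostname`. A minimal sketch of that retry pattern, assuming key-based auth against the forwarded port and key path shown in the log (127.0.0.1:33193, the m02 id_rsa) and using golang.org/x/crypto/ssh; this is an illustration of the idea, not minikube's libmachine implementation.

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps attempting an SSH handshake until it succeeds or the
    // deadline expires; early attempts may fail while sshd is still coming up.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, deadline time.Duration) (*ssh.Client, error) {
    	var lastErr error
    	for start := time.Now(); time.Since(start) < deadline; time.Sleep(time.Second) {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			return client, nil
    		}
    		lastErr = err // e.g. "read: connection reset by peer" right after docker start
    	}
    	return nil, fmt.Errorf("ssh not reachable: %w", lastErr)
    }

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node, not for production
    		Timeout:         10 * time.Second,
    	}
    	client, err := dialWithRetry("127.0.0.1:33193", cfg, 2*time.Minute)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	out, err := sess.Output("hostname") // same first command the provisioner runs above
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s", out)
    }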
	I1002 07:16:56.311782  346554 ubuntu.go:182] provisioning hostname "ha-550225-m02"
	I1002 07:16:56.311878  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:56.344609  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:56.344917  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:56.344929  346554 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225-m02 && echo "ha-550225-m02" | sudo tee /etc/hostname
	I1002 07:16:56.639669  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:16:56.639788  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:56.668649  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:56.668967  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:56.668991  346554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:16:56.892812  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:16:56.892848  346554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:16:56.892865  346554 ubuntu.go:190] setting up certificates
	I1002 07:16:56.892886  346554 provision.go:84] configureAuth start
	I1002 07:16:56.892966  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:56.931268  346554 provision.go:143] copyHostCerts
	I1002 07:16:56.931313  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:56.931346  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:16:56.931357  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:56.931436  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:16:56.931520  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:56.931541  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:16:56.931548  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:56.931576  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:16:56.931619  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:56.931640  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:16:56.931645  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:56.931673  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:16:56.931727  346554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225-m02 san=[127.0.0.1 192.168.49.3 ha-550225-m02 localhost minikube]
	I1002 07:16:57.380087  346554 provision.go:177] copyRemoteCerts
	I1002 07:16:57.380161  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:16:57.380209  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:57.399377  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:57.503607  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:16:57.503674  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:16:57.534864  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:16:57.534935  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 07:16:57.579624  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:16:57.579686  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:16:57.613798  346554 provision.go:87] duration metric: took 720.891298ms to configureAuth
	I1002 07:16:57.613866  346554 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:16:57.614125  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:57.614268  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:57.655334  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:57.655649  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:57.655669  346554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:16:58.296218  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:16:58.296241  346554 machine.go:96] duration metric: took 5.250794733s to provisionDockerMachine
	I1002 07:16:58.296266  346554 start.go:293] postStartSetup for "ha-550225-m02" (driver="docker")
	I1002 07:16:58.296279  346554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:16:58.296361  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:16:58.296407  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.334246  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.454625  346554 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:16:58.462912  346554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:16:58.462946  346554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:16:58.462957  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:16:58.463024  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:16:58.463132  346554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:16:58.463146  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:16:58.463245  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:16:58.476350  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:58.502934  346554 start.go:296] duration metric: took 206.651168ms for postStartSetup
	I1002 07:16:58.503074  346554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:16:58.503140  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.541010  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.704044  346554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:16:58.724725  346554 fix.go:56] duration metric: took 6.194210695s for fixHost
	I1002 07:16:58.724751  346554 start.go:83] releasing machines lock for "ha-550225-m02", held for 6.194264053s
	I1002 07:16:58.724830  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:58.757236  346554 out.go:179] * Found network options:
	I1002 07:16:58.760259  346554 out.go:179]   - NO_PROXY=192.168.49.2
	W1002 07:16:58.763701  346554 proxy.go:120] fail to check proxy env: Error ip not in block
	W1002 07:16:58.763752  346554 proxy.go:120] fail to check proxy env: Error ip not in block
	I1002 07:16:58.763820  346554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:16:58.763852  346554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:16:58.763870  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.763907  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.799805  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.800051  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:59.297366  346554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:16:59.320265  346554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:16:59.320354  346554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:16:59.335012  346554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:16:59.335039  346554 start.go:495] detecting cgroup driver to use...
	I1002 07:16:59.335070  346554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:16:59.335161  346554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:16:59.357972  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:16:59.378445  346554 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:16:59.378521  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:16:59.402692  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:16:59.423049  346554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:16:59.777657  346554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:17:00.088553  346554 docker.go:234] disabling docker service ...
	I1002 07:17:00.088656  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:17:00.130593  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:17:00.210008  346554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:17:00.633988  346554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:17:01.021589  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:17:01.054167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:17:01.092894  346554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:17:01.092980  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.111830  346554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:17:01.111928  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.139965  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.151897  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.168595  346554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:17:01.186410  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.204646  346554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.221763  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.236700  346554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:17:01.257944  346554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:17:01.272835  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:17:01.618372  346554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:18:32.051852  346554 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.433435555s)
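The sed one-liners above patch the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted (the restart alone accounts for the 1m30s just reported). A rough Go equivalent of those first two in-place edits, shown only to make the transformation explicit; the file path and values are taken from the log, while the helper function itself is hypothetical.

    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    // patchCrioConf rewrites the pause_image and cgroup_manager lines the same way
    // the sed commands in the log do, leaving the rest of the file untouched.
    func patchCrioConf(path, pauseImage, cgroupManager string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupManager+`"`))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
    		"registry.k8s.io/pause:3.10.1", "cgroupfs"); err != nil {
    		log.Fatal(err)
    	}
    }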
	I1002 07:18:32.051878  346554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:18:32.051938  346554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:18:32.056156  346554 start.go:563] Will wait 60s for crictl version
	I1002 07:18:32.056222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:18:32.060117  346554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:18:32.088770  346554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:18:32.088860  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:18:32.119432  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:18:32.154051  346554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:18:32.156909  346554 out.go:179]   - env NO_PROXY=192.168.49.2
	I1002 07:18:32.159957  346554 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:18:32.177164  346554 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:18:32.181230  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:18:32.191471  346554 mustload.go:65] Loading cluster: ha-550225
	I1002 07:18:32.191729  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:18:32.191999  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:18:32.209130  346554 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:18:32.209416  346554 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.3
	I1002 07:18:32.209433  346554 certs.go:195] generating shared ca certs ...
	I1002 07:18:32.209448  346554 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:18:32.209574  346554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:18:32.209622  346554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:18:32.209635  346554 certs.go:257] generating profile certs ...
	I1002 07:18:32.209712  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:18:32.209761  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.e172f685
	I1002 07:18:32.209802  346554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:18:32.209816  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:18:32.209829  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:18:32.209843  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:18:32.209855  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:18:32.209869  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:18:32.209883  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:18:32.209898  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:18:32.209908  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:18:32.209964  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:18:32.209998  346554 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:18:32.210010  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:18:32.210033  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:18:32.210061  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:18:32.210089  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:18:32.210137  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:18:32.210168  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.210187  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.210198  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.210261  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:18:32.227689  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:18:32.315413  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1002 07:18:32.319445  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1002 07:18:32.328111  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1002 07:18:32.331777  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1002 07:18:32.340081  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1002 07:18:32.343746  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1002 07:18:32.351558  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1002 07:18:32.354911  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1002 07:18:32.362878  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1002 07:18:32.366632  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1002 07:18:32.374581  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1002 07:18:32.378281  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1002 07:18:32.386552  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:18:32.405394  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:18:32.422759  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:18:32.440360  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:18:32.457759  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:18:32.475843  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:18:32.493288  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:18:32.510289  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:18:32.527991  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:18:32.545549  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:18:32.562952  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:18:32.580383  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1002 07:18:32.593477  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1002 07:18:32.606933  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1002 07:18:32.619772  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1002 07:18:32.634020  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1002 07:18:32.646873  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1002 07:18:32.659836  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1002 07:18:32.673417  346554 ssh_runner.go:195] Run: openssl version
	I1002 07:18:32.679719  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:18:32.688081  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.692003  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.692135  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.733286  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:18:32.741334  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:18:32.749624  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.753431  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.753505  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.794364  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:18:32.802247  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:18:32.810290  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.813847  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.813927  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.854739  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:18:32.862471  346554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:18:32.866281  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:18:32.907787  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:18:32.948617  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:18:32.989448  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:18:33.030881  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:18:33.074016  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
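Each `openssl x509 -noout -checkend 86400` call above asks whether the given control-plane certificate expires within the next 24 hours; only a failing check would trigger regeneration. An equivalent check in Go with crypto/x509, as a small sketch (the path is one of the files tested above; exit behaviour is simplified to a printed message).

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // before now+window, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil || block.Type != "CERTIFICATE" {
    		return false, errors.New("no certificate PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if soon {
    		fmt.Println("certificate expires within 24h: regeneration needed")
    	} else {
    		fmt.Println("certificate valid for at least another 24h")
    	}
    }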
	I1002 07:18:33.117026  346554 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1002 07:18:33.117170  346554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:18:33.117220  346554 kube-vip.go:115] generating kube-vip config ...
	I1002 07:18:33.117288  346554 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:18:33.133837  346554 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:18:33.133931  346554 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
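Because the `lsmod | grep ip_vs` check above exits non-zero, the generated kube-vip manifest gives up IPVS-based control-plane load balancing and relies on ARP announcement of the 192.168.49.254 VIP only. A small sketch of the same presence test done by reading /proc/modules directly instead of shelling out; this is an alternative illustration, not what minikube itself runs.

    package main

    import (
    	"bufio"
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    // ipvsLoaded reports whether the ip_vs kernel module appears in /proc/modules,
    // the same information `lsmod | grep ip_vs` is derived from.
    func ipvsLoaded() (bool, error) {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		return false, err
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		name := strings.Fields(sc.Text())
    		if len(name) > 0 && (name[0] == "ip_vs" || strings.HasPrefix(name[0], "ip_vs_")) {
    			return true, nil
    		}
    	}
    	return false, sc.Err()
    }

    func main() {
    	ok, err := ipvsLoaded()
    	if err != nil {
    		log.Fatal(err)
    	}
    	if ok {
    		fmt.Println("ip_vs available: IPVS load balancing possible")
    	} else {
    		fmt.Println("ip_vs missing: fall back to ARP-only VIP, as in the log above")
    	}
    }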
	I1002 07:18:33.134029  346554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:18:33.142503  346554 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:18:33.142627  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1002 07:18:33.150436  346554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 07:18:33.163196  346554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:18:33.176800  346554 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:18:33.191119  346554 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:18:33.195012  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:18:33.205076  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:18:33.339361  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:18:33.353170  346554 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:18:33.353495  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:18:33.359500  346554 out.go:179] * Verifying Kubernetes components...
	I1002 07:18:33.362288  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:18:33.491257  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:18:33.505467  346554 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1002 07:18:33.505560  346554 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1002 07:18:33.505989  346554 node_ready.go:35] waiting up to 6m0s for node "ha-550225-m02" to be "Ready" ...
	W1002 07:18:35.506749  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:38.010468  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:40.016084  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:42.506872  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:44.507212  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:47.007659  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:49.506544  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:51.506605  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:18:54.785251  346554 node_ready.go:49] node "ha-550225-m02" is "Ready"
	I1002 07:18:54.785285  346554 node_ready.go:38] duration metric: took 21.279267345s for node "ha-550225-m02" to be "Ready" ...
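The node_ready.go lines above poll the API server for the node's Ready condition, tolerating connection-refused errors until the apiserver answers, and they do so through the stale-host override noted at 07:18:33 (the kube-vip VIP is swapped for the primary control plane's 192.168.49.2:8443). A condensed sketch of that wait using client-go; the kubeconfig path and the six-minute budget come from the log, the rest is illustrative.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21643-292504/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Mirror the "Overriding stale ClientConfig host" step: talk to the primary
    	// control plane directly instead of the kube-vip VIP.
    	cfg.Host = "https://192.168.49.2:8443"
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := client.CoreV1().Nodes().Get(ctx, "ha-550225-m02", metav1.GetOptions{})
    			if err != nil {
    				// connection refused while the apiserver restarts: keep retrying
    				return false, nil
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(`node "ha-550225-m02" is Ready`)
    }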
	I1002 07:18:54.785300  346554 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:18:54.785382  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:55.286257  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:55.786278  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:56.285480  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:56.785495  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:57.286432  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:57.786472  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:58.285596  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:58.786260  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:59.286148  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:59.785674  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:00.286401  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:00.786468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:01.286310  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:01.786133  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:02.285476  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:02.785523  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:03.285578  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:03.785477  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:04.285835  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:04.786152  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:05.285495  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:05.785558  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:06.285602  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:06.785496  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:07.286468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:07.786358  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:08.286294  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:08.786349  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:09.286208  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:09.786292  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:10.285577  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:10.785589  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:11.286341  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:11.785523  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:12.286415  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:12.786007  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:13.286205  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:13.786328  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:14.285849  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:14.786397  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:15.285488  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:15.785431  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:16.285445  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:16.785468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:17.285527  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:17.785637  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:18.285535  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:18.786137  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:19.286152  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:19.786052  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:20.285507  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:20.785522  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:21.285716  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:21.786849  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:22.286372  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:22.786418  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:23.286092  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:23.786120  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:24.285506  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:24.785439  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:25.286469  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:25.785780  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:26.285507  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:26.785611  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:27.286260  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:27.785499  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:28.285509  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:28.785521  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:29.285762  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:29.786049  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:30.286329  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:30.785543  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:31.285473  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:31.786013  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:32.285818  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:32.785931  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:33.285557  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
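Once the node is Ready, the restart logic waits for a kube-apiserver process to appear on the node, re-running the pgrep above roughly every half second; when that wait runs out it switches to collecting container state and logs below. A minimal local sketch of such a process wait, assuming it runs on the node itself (the test actually issues pgrep through its SSH runner).

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process exists or
    // the context deadline expires, returning the newest matching PID.
    func waitForAPIServerProcess(ctx context.Context) (string, error) {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil
    		}
    		select {
    		case <-ctx.Done():
    			return "", fmt.Errorf("kube-apiserver process never appeared: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	pid, err := waitForAPIServerProcess(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("kube-apiserver pid:", pid)
    }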
	I1002 07:19:33.786122  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:33.786216  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:33.819648  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:33.819668  346554 cri.go:89] found id: ""
	I1002 07:19:33.819678  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:33.819746  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.823889  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:33.823960  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:33.855251  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:33.855272  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:33.855277  346554 cri.go:89] found id: ""
	I1002 07:19:33.855285  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:33.855351  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.858992  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.862888  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:33.862975  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:33.894144  346554 cri.go:89] found id: ""
	I1002 07:19:33.894169  346554 logs.go:282] 0 containers: []
	W1002 07:19:33.894178  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:33.894184  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:33.894243  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:33.921104  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:33.921125  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:33.921130  346554 cri.go:89] found id: ""
	I1002 07:19:33.921137  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:33.921194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.925016  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.928536  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:33.928631  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:33.961082  346554 cri.go:89] found id: ""
	I1002 07:19:33.961111  346554 logs.go:282] 0 containers: []
	W1002 07:19:33.961121  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:33.961127  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:33.961187  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:33.993876  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:33.993901  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:33.993906  346554 cri.go:89] found id: ""
	I1002 07:19:33.993916  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:33.993979  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.999741  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:34.004783  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:34.004869  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:34.034228  346554 cri.go:89] found id: ""
	I1002 07:19:34.034256  346554 logs.go:282] 0 containers: []
	W1002 07:19:34.034265  346554 logs.go:284] No container was found matching "kindnet"
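With no apiserver process found, the diagnostics above enumerate containers per component with `crictl ps -a --quiet --name=<component>` and then pull the last 400 lines of logs from each container that was found. A stripped-down sketch of that gathering step using os/exec; the component names and crictl flags are the ones visible in the log, while the wrapper function is hypothetical.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // gatherComponentLogs lists all containers (running or exited) whose name
    // matches the component and dumps the tail of each one's logs.
    func gatherComponentLogs(component string) error {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return fmt.Errorf("listing %s containers: %w", component, err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		fmt.Printf("no container found matching %q\n", component)
    		return nil
    	}
    	for _, id := range ids {
    		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			return fmt.Errorf("logs for %s: %w", id, err)
    		}
    		fmt.Printf("==> %s [%s] <==\n%s\n", component, id, logs)
    	}
    	return nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
    		if err := gatherComponentLogs(c); err != nil {
    			fmt.Println(err)
    		}
    	}
    }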
	I1002 07:19:34.034275  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:34.034288  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:34.096737  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:34.096779  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:34.132301  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:34.132339  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:34.182701  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:34.182737  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:34.217015  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:34.217044  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:34.232712  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:34.232741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:34.652633  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:34.643757    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.644504    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.646352    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647072    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647911    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:34.643757    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.644504    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.646352    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647072    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647911    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
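	The "describe nodes" failure above recurs throughout this section: kubectl on the node cannot reach the apiserver at localhost:8443 (connection refused), even though a kube-apiserver container ID (fc952c5b…) was found earlier. A minimal way to confirm the same symptom by hand, assuming shell access to the node (the ss and curl probes below are illustrative additions, not commands run by the test):
	
	# sketch only; the container ID is copied from the log entries above
	sudo crictl ps -a --name=kube-apiserver         # is the apiserver container running or exited?
	sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
	curl -sk https://localhost:8443/healthz || echo "healthz unreachable"
	sudo /usr/local/bin/crictl logs --tail 50 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10
	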
	I1002 07:19:34.652655  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:34.652669  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:34.681086  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:34.681118  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:34.708033  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:34.708062  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:34.793299  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:34.793407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:34.848620  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:34.848649  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:34.948533  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:34.948572  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:37.477483  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:37.488961  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:37.489035  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:37.518325  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:37.518349  346554 cri.go:89] found id: ""
	I1002 07:19:37.518358  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:37.518419  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.522140  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:37.522269  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:37.549073  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:37.549093  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:37.549098  346554 cri.go:89] found id: ""
	I1002 07:19:37.549105  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:37.549190  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.552869  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.556417  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:37.556497  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:37.589096  346554 cri.go:89] found id: ""
	I1002 07:19:37.589122  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.589130  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:37.589137  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:37.589199  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:37.615330  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:37.615354  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:37.615360  346554 cri.go:89] found id: ""
	I1002 07:19:37.615367  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:37.615424  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.619166  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.622673  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:37.622742  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:37.648426  346554 cri.go:89] found id: ""
	I1002 07:19:37.648458  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.648467  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:37.648474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:37.648536  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:37.676515  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:37.676536  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:37.676541  346554 cri.go:89] found id: ""
	I1002 07:19:37.676549  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:37.676605  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.680280  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.684478  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:37.684552  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:37.710689  346554 cri.go:89] found id: ""
	I1002 07:19:37.710713  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.710722  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:37.710731  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:37.710741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:37.807134  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:37.807171  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:37.877814  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:37.869236    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.869721    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871280    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871668    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.873245    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:37.869236    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.869721    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871280    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871668    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.873245    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:37.877839  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:37.877853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:37.920820  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:37.920854  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:37.956765  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:37.956802  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:37.985482  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:37.985510  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:38.017517  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:38.017548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:38.100846  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:38.100884  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:38.136290  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:38.136318  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:38.151732  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:38.151763  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:38.177792  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:38.177822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:38.229226  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:38.229260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:40.756410  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:40.767378  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:40.767448  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:40.799187  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:40.799205  346554 cri.go:89] found id: ""
	I1002 07:19:40.799213  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:40.799268  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.804369  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:40.804454  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:40.830559  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:40.830628  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:40.830652  346554 cri.go:89] found id: ""
	I1002 07:19:40.830679  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:40.830771  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.835205  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.839714  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:40.839827  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:40.867014  346554 cri.go:89] found id: ""
	I1002 07:19:40.867039  346554 logs.go:282] 0 containers: []
	W1002 07:19:40.867048  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:40.867054  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:40.867141  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:40.905810  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:40.905829  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:40.905835  346554 cri.go:89] found id: ""
	I1002 07:19:40.905842  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:40.905898  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.909648  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.913397  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:40.913471  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:40.940488  346554 cri.go:89] found id: ""
	I1002 07:19:40.940511  346554 logs.go:282] 0 containers: []
	W1002 07:19:40.940520  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:40.940526  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:40.940585  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:40.968408  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:40.968429  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:40.968439  346554 cri.go:89] found id: ""
	I1002 07:19:40.968447  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:40.968503  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.972336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.976070  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:40.976163  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:41.010288  346554 cri.go:89] found id: ""
	I1002 07:19:41.010318  346554 logs.go:282] 0 containers: []
	W1002 07:19:41.010328  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:41.010338  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:41.010353  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:41.058706  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:41.058741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:41.085223  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:41.085252  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:41.117537  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:41.117564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:41.218224  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:41.218265  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:41.234686  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:41.234727  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:41.270240  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:41.270276  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:41.321885  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:41.321922  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:41.350649  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:41.350684  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:41.382710  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:41.382740  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:41.465872  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:41.465911  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:41.547196  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:41.537685    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539123    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539741    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.541682    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.542291    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:41.537685    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539123    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539741    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.541682    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.542291    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:41.547220  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:41.547234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.074126  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:44.087746  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:44.087861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:44.116198  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.116223  346554 cri.go:89] found id: ""
	I1002 07:19:44.116232  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:44.116290  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.120227  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:44.120325  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:44.146916  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:44.146943  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:44.146948  346554 cri.go:89] found id: ""
	I1002 07:19:44.146955  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:44.147009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.151266  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.155925  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:44.156012  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:44.190430  346554 cri.go:89] found id: ""
	I1002 07:19:44.190458  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.190467  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:44.190473  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:44.190529  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:44.219366  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:44.219387  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:44.219392  346554 cri.go:89] found id: ""
	I1002 07:19:44.219400  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:44.219455  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.223324  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.226924  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:44.227000  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:44.252543  346554 cri.go:89] found id: ""
	I1002 07:19:44.252566  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.252576  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:44.252583  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:44.252650  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:44.280466  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:44.280489  346554 cri.go:89] found id: ""
	I1002 07:19:44.280498  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:44.280559  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.284050  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:44.284122  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:44.314223  346554 cri.go:89] found id: ""
	I1002 07:19:44.314250  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.314259  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:44.314269  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:44.314304  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.340933  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:44.340965  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:44.377320  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:44.377352  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:44.411349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:44.411377  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:44.516647  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:44.516695  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:44.585736  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:44.578237    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.578651    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580147    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580498    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.581966    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:44.578237    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.578651    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580147    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580498    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.581966    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:44.585771  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:44.585785  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:44.629867  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:44.629909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:44.681709  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:44.681750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:44.710536  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:44.710566  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:44.801698  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:44.801744  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:44.834146  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:44.834175  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:47.351602  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:47.362458  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:47.362546  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:47.391769  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:47.391792  346554 cri.go:89] found id: ""
	I1002 07:19:47.391802  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:47.391863  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.395882  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:47.395971  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:47.428129  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:47.428151  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:47.428156  346554 cri.go:89] found id: ""
	I1002 07:19:47.428164  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:47.428225  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.432313  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.436344  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:47.436415  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:47.464208  346554 cri.go:89] found id: ""
	I1002 07:19:47.464230  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.464238  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:47.464244  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:47.464302  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:47.494674  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:47.494731  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:47.494773  346554 cri.go:89] found id: ""
	I1002 07:19:47.494800  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:47.494885  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.499610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.503658  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:47.503779  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:47.532490  346554 cri.go:89] found id: ""
	I1002 07:19:47.532517  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.532527  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:47.532534  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:47.532599  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:47.565084  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:47.565122  346554 cri.go:89] found id: ""
	I1002 07:19:47.565131  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:47.565231  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.569404  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:47.569483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:47.597243  346554 cri.go:89] found id: ""
	I1002 07:19:47.597266  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.597275  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:47.597284  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:47.597294  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:47.693710  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:47.693748  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:47.771715  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:47.763458    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.764216    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.765967    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.766445    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.768080    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:47.763458    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.764216    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.765967    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.766445    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.768080    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:47.771739  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:47.771752  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:47.810005  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:47.810090  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:47.890792  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:47.890824  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:47.977230  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:47.977271  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:48.018612  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:48.018643  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:48.105364  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:48.105401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:48.124841  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:48.124870  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:48.193027  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:48.193069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:48.239251  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:48.239279  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:50.782662  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:50.794011  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:50.794105  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:50.838191  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:50.838216  346554 cri.go:89] found id: ""
	I1002 07:19:50.838225  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:50.838286  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.842655  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:50.842755  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:50.891807  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:50.891833  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:50.891839  346554 cri.go:89] found id: ""
	I1002 07:19:50.891847  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:50.891964  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.899196  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.904048  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:50.904143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:50.939603  346554 cri.go:89] found id: ""
	I1002 07:19:50.939626  346554 logs.go:282] 0 containers: []
	W1002 07:19:50.939635  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:50.939641  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:50.939735  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:50.971030  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:50.971053  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:50.971059  346554 cri.go:89] found id: ""
	I1002 07:19:50.971067  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:50.971179  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.975612  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.980140  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:50.980242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:51.025029  346554 cri.go:89] found id: ""
	I1002 07:19:51.025055  346554 logs.go:282] 0 containers: []
	W1002 07:19:51.025064  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:51.025071  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:51.025186  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:51.058743  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:51.058764  346554 cri.go:89] found id: ""
	I1002 07:19:51.058772  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:51.058862  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:51.064931  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:51.065035  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:51.101431  346554 cri.go:89] found id: ""
	I1002 07:19:51.101462  346554 logs.go:282] 0 containers: []
	W1002 07:19:51.101486  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:51.101498  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:51.101531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:51.126461  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:51.126494  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:51.217174  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:51.208157    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.208931    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.210624    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.211554    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.212602    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:51.208157    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.208931    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.210624    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.211554    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.212602    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:51.217200  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:51.217216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:51.279369  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:51.279449  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:51.337216  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:51.337253  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:51.425630  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:51.425669  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:51.528560  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:51.528601  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:51.556690  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:51.556719  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:51.600118  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:51.600251  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:51.632616  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:51.632650  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:51.662904  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:51.662935  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
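	The block above is one full pass of the wait-and-collect loop, and the same sequence repeats roughly every three seconds while the apiserver stays unreachable. Condensed from only the commands already visible in this log (a sketch of the observed command sequence, not minikube's actual implementation; the for-loop wrapper and the container-id placeholder are editorial shorthand):
	
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'                     # probe for a running apiserver process
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$name"                       # list matching container IDs (may be empty)
	done
	sudo journalctl -u kubelet -n 400                                # kubelet logs
	sudo journalctl -u crio -n 400                                   # CRI-O logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo /usr/local/bin/crictl logs --tail 400 "<container-id>"      # per-container logs for each ID found
	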
	I1002 07:19:54.196274  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:54.207476  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:54.207546  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:54.238643  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:54.238664  346554 cri.go:89] found id: ""
	I1002 07:19:54.238673  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:54.238729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.242382  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:54.242456  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:54.274345  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:54.274377  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:54.274383  346554 cri.go:89] found id: ""
	I1002 07:19:54.274390  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:54.274451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.278686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.283146  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:54.283225  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:54.315609  346554 cri.go:89] found id: ""
	I1002 07:19:54.315635  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.315645  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:54.315652  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:54.315718  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:54.343684  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:54.343709  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:54.343715  346554 cri.go:89] found id: ""
	I1002 07:19:54.343723  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:54.343789  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.347649  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.351327  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:54.351428  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:54.380301  346554 cri.go:89] found id: ""
	I1002 07:19:54.380336  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.380346  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:54.380353  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:54.380440  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:54.413081  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:54.413105  346554 cri.go:89] found id: ""
	I1002 07:19:54.413114  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:54.413172  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.417107  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:54.417181  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:54.450903  346554 cri.go:89] found id: ""
	I1002 07:19:54.450930  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.450947  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:54.450957  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:54.450972  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:54.551509  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:54.551550  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:54.567991  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:54.568018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:54.641344  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:54.632782    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.633432    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635278    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635893    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.637542    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:54.632782    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.633432    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635278    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635893    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.637542    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:54.641366  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:54.641403  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:54.677557  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:54.677592  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:54.742382  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:54.742417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:54.830648  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:54.830681  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:54.866699  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:54.866727  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:54.893138  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:54.893166  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:54.942885  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:54.942920  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:54.977070  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:54.977098  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:57.528866  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:57.540731  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:57.540803  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:57.571921  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:57.571945  346554 cri.go:89] found id: ""
	I1002 07:19:57.571954  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:57.572028  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.575942  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:57.576018  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:57.604185  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:57.604219  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:57.604224  346554 cri.go:89] found id: ""
	I1002 07:19:57.604232  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:57.604326  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.608202  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.611833  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:57.611912  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:57.640401  346554 cri.go:89] found id: ""
	I1002 07:19:57.640431  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.640440  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:57.640447  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:57.640519  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:57.671538  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:57.671560  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:57.671565  346554 cri.go:89] found id: ""
	I1002 07:19:57.671572  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:57.671629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.675430  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.679760  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:57.679837  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:57.707483  346554 cri.go:89] found id: ""
	I1002 07:19:57.707511  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.707521  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:57.707527  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:57.707592  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:57.736308  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:57.736330  346554 cri.go:89] found id: ""
	I1002 07:19:57.736338  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:57.736407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.740334  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:57.740505  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:57.771488  346554 cri.go:89] found id: ""
	I1002 07:19:57.771558  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.771575  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:57.771585  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:57.771599  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:57.824974  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:57.825013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:57.862787  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:57.862825  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:57.891348  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:57.891374  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:57.923682  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:57.923711  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:57.996115  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:57.987953    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.988650    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990229    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990623    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.992277    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:57.987953    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.988650    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990229    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990623    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.992277    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:57.996139  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:57.996155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:58.033126  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:58.033198  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:58.106377  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:58.106415  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:58.139224  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:58.139252  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:58.226478  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:58.226525  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:58.331297  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:58.331338  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:00.847448  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:00.859829  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:00.859905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:00.887965  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:00.888039  346554 cri.go:89] found id: ""
	I1002 07:20:00.888063  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:00.888133  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.892548  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:00.892623  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:00.922567  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:00.922586  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:00.922591  346554 cri.go:89] found id: ""
	I1002 07:20:00.922598  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:00.922653  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.926435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.930250  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:00.930339  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:00.959728  346554 cri.go:89] found id: ""
	I1002 07:20:00.959759  346554 logs.go:282] 0 containers: []
	W1002 07:20:00.959769  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:00.959777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:00.959861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:00.988254  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:00.988317  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:00.988338  346554 cri.go:89] found id: ""
	I1002 07:20:00.988365  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:00.988466  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.993016  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.996699  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:00.996818  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:01.024791  346554 cri.go:89] found id: ""
	I1002 07:20:01.024815  346554 logs.go:282] 0 containers: []
	W1002 07:20:01.024823  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:01.024849  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:01.024931  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:01.056703  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:01.056728  346554 cri.go:89] found id: ""
	I1002 07:20:01.056737  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:01.056820  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:01.061200  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:01.061302  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:01.092652  346554 cri.go:89] found id: ""
	I1002 07:20:01.092680  346554 logs.go:282] 0 containers: []
	W1002 07:20:01.092690  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:01.092701  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:01.092715  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:01.121048  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:01.121084  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:01.227967  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:01.228007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:01.246697  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:01.246728  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:01.299528  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:01.299606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:01.329789  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:01.329875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:01.412310  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:01.412348  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:01.449621  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:01.449651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:01.528807  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:01.519940    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.520990    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.521913    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523485    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523993    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:01.519940    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.520990    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.521913    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523485    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523993    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:01.528832  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:01.528848  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:01.557543  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:01.557575  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:01.606902  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:01.607007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:04.163648  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:04.175704  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:04.175798  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:04.202895  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:04.202920  346554 cri.go:89] found id: ""
	I1002 07:20:04.202929  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:04.202988  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.206773  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:04.206847  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:04.237461  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:04.237484  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:04.237490  346554 cri.go:89] found id: ""
	I1002 07:20:04.237497  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:04.237551  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.241192  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.244646  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:04.244721  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:04.271145  346554 cri.go:89] found id: ""
	I1002 07:20:04.271172  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.271181  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:04.271188  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:04.271290  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:04.301758  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:04.301787  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:04.301792  346554 cri.go:89] found id: ""
	I1002 07:20:04.301800  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:04.301858  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.305658  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.309360  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:04.309437  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:04.339291  346554 cri.go:89] found id: ""
	I1002 07:20:04.339317  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.339339  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:04.339347  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:04.339417  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:04.366771  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:04.366841  346554 cri.go:89] found id: ""
	I1002 07:20:04.366866  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:04.366961  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.371032  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:04.371213  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:04.396810  346554 cri.go:89] found id: ""
	I1002 07:20:04.396889  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.396905  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:04.396916  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:04.396933  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:04.414258  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:04.414291  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:04.478315  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:04.478395  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:04.536808  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:04.536847  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:04.564995  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:04.565025  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:04.592902  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:04.592931  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:04.671813  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:04.671849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:04.710652  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:04.710684  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:04.820627  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:04.820664  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:04.897187  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:04.884402    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.885229    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.886886    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.887493    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.889166    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:04.884402    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.885229    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.886886    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.887493    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.889166    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:04.897212  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:04.897229  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:04.936329  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:04.936358  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.496901  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:07.514473  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:07.514547  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:07.540993  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:07.541017  346554 cri.go:89] found id: ""
	I1002 07:20:07.541025  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:07.541109  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.545015  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:07.545090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:07.572646  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:07.572670  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:07.572675  346554 cri.go:89] found id: ""
	I1002 07:20:07.572683  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:07.572763  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.576707  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.580612  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:07.580684  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:07.606885  346554 cri.go:89] found id: ""
	I1002 07:20:07.606909  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.606917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:07.606923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:07.606980  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:07.633971  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.634051  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:07.634072  346554 cri.go:89] found id: ""
	I1002 07:20:07.634115  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:07.634212  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.638009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.641489  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:07.641558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:07.669226  346554 cri.go:89] found id: ""
	I1002 07:20:07.669252  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.669262  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:07.669269  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:07.669328  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:07.697084  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:07.697110  346554 cri.go:89] found id: ""
	I1002 07:20:07.697119  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:07.697218  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.702023  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:07.702125  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:07.729244  346554 cri.go:89] found id: ""
	I1002 07:20:07.729270  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.729279  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:07.729289  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:07.729305  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:07.774187  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:07.774226  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.840113  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:07.840153  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:07.873716  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:07.873757  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:07.891261  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:07.891289  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:07.916233  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:07.916263  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:07.952299  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:07.952332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:07.986719  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:07.986746  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:08.071303  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:08.071345  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:08.108002  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:08.108028  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:08.210536  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:08.210576  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:08.294093  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:08.284651    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286253    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286944    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.288549    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.289239    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:08.284651    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286253    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286944    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.288549    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.289239    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:10.795316  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:10.809081  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:10.809162  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:10.842834  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:10.842857  346554 cri.go:89] found id: ""
	I1002 07:20:10.842866  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:10.842923  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.846661  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:10.846743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:10.885119  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:10.885154  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:10.885160  346554 cri.go:89] found id: ""
	I1002 07:20:10.885167  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:10.885227  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.888993  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.892673  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:10.892745  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:10.919884  346554 cri.go:89] found id: ""
	I1002 07:20:10.919910  346554 logs.go:282] 0 containers: []
	W1002 07:20:10.919920  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:10.919926  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:10.919986  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:10.948791  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:10.948813  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:10.948818  346554 cri.go:89] found id: ""
	I1002 07:20:10.948832  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:10.948888  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.952760  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.956362  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:10.956465  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:10.984495  346554 cri.go:89] found id: ""
	I1002 07:20:10.984518  346554 logs.go:282] 0 containers: []
	W1002 07:20:10.984528  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:10.984535  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:10.984636  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:11.017757  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:11.017840  346554 cri.go:89] found id: ""
	I1002 07:20:11.017854  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:11.017923  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:11.022016  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:11.022121  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:11.049783  346554 cri.go:89] found id: ""
	I1002 07:20:11.049807  346554 logs.go:282] 0 containers: []
	W1002 07:20:11.049816  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:11.049826  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:11.049858  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:11.130029  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:11.121829    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.122481    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124100    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124782    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.126290    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:11.121829    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.122481    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124100    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124782    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.126290    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:11.130050  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:11.130065  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:11.158585  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:11.158617  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:11.206663  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:11.206698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:11.251780  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:11.251812  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:11.320488  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:11.320524  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:11.401025  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:11.401061  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:11.509831  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:11.509925  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:11.528908  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:11.528984  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:11.560309  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:11.560340  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:11.587476  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:11.587505  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:14.117921  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:14.129181  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:14.129256  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:14.155142  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:14.155165  346554 cri.go:89] found id: ""
	I1002 07:20:14.155174  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:14.155234  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.158996  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:14.159072  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:14.187368  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:14.187439  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:14.187451  346554 cri.go:89] found id: ""
	I1002 07:20:14.187459  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:14.187516  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.191550  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.195394  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:14.195489  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:14.221702  346554 cri.go:89] found id: ""
	I1002 07:20:14.221731  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.221741  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:14.221748  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:14.221805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:14.250745  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:14.250768  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:14.250774  346554 cri.go:89] found id: ""
	I1002 07:20:14.250781  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:14.250840  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.254464  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.257656  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:14.257732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:14.287657  346554 cri.go:89] found id: ""
	I1002 07:20:14.287684  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.287693  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:14.287699  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:14.287763  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:14.317647  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:14.317670  346554 cri.go:89] found id: ""
	I1002 07:20:14.317680  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:14.317738  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.321550  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:14.321664  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:14.347420  346554 cri.go:89] found id: ""
	I1002 07:20:14.347445  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.347455  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:14.347465  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:14.347476  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:14.428069  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:14.428106  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:14.482408  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:14.482447  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:14.534003  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:14.534036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:14.587616  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:14.587652  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:14.615153  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:14.615189  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:14.649482  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:14.649517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:14.745400  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:14.745440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:14.765273  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:14.765307  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:14.841087  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:14.832238    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.833271    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.834838    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.835677    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.837327    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:14.832238    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.833271    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.834838    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.835677    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.837327    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:14.841109  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:14.841123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:14.867206  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:14.867236  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:17.396729  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:17.407809  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:17.407882  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:17.435626  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:17.435649  346554 cri.go:89] found id: ""
	I1002 07:20:17.435667  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:17.435729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.440093  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:17.440173  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:17.481710  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:17.481732  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:17.481738  346554 cri.go:89] found id: ""
	I1002 07:20:17.481745  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:17.481808  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.488857  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.492676  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:17.492748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:17.535179  346554 cri.go:89] found id: ""
	I1002 07:20:17.535251  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.535277  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:17.535317  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:17.535404  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:17.567305  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:17.567330  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:17.567335  346554 cri.go:89] found id: ""
	I1002 07:20:17.567343  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:17.567405  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.572504  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.576436  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:17.576540  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:17.604459  346554 cri.go:89] found id: ""
	I1002 07:20:17.604489  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.604498  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:17.604504  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:17.604568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:17.632230  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:17.632254  346554 cri.go:89] found id: ""
	I1002 07:20:17.632263  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:17.632352  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.636309  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:17.636416  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:17.664031  346554 cri.go:89] found id: ""
	I1002 07:20:17.664058  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.664068  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:17.664078  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:17.664090  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:17.690836  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:17.690911  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:17.720348  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:17.720376  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:17.752215  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:17.752295  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:17.855749  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:17.855789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:17.872293  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:17.872320  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:17.923506  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:17.923540  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:17.971187  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:17.971220  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:18.041592  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:18.041630  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:18.085650  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:18.085682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:18.171333  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:18.171372  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:18.244409  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:18.236277    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.236822    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238310    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238776    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.240614    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:18.236277    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.236822    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238310    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238776    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.240614    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:20.746282  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:20.757663  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:20.757743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:20.787729  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:20.787751  346554 cri.go:89] found id: ""
	I1002 07:20:20.787760  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:20.787845  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.792330  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:20.792424  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:20.829800  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:20.829824  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:20.829830  346554 cri.go:89] found id: ""
	I1002 07:20:20.829838  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:20.829899  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.833952  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.837642  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:20.837723  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:20.867702  346554 cri.go:89] found id: ""
	I1002 07:20:20.867725  346554 logs.go:282] 0 containers: []
	W1002 07:20:20.867734  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:20.867740  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:20.867830  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:20.908994  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:20.909016  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:20.909022  346554 cri.go:89] found id: ""
	I1002 07:20:20.909029  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:20.909085  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.913045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.916567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:20.916643  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:20.947545  346554 cri.go:89] found id: ""
	I1002 07:20:20.947571  346554 logs.go:282] 0 containers: []
	W1002 07:20:20.947581  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:20.947588  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:20.947651  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:20.980904  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:20.980984  346554 cri.go:89] found id: ""
	I1002 07:20:20.980999  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:20.981082  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.984909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:20.984982  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:21.020855  346554 cri.go:89] found id: ""
	I1002 07:20:21.020878  346554 logs.go:282] 0 containers: []
	W1002 07:20:21.020887  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:21.020896  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:21.020907  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:21.117602  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:21.117638  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:21.192022  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:21.182767    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.183788    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185393    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185998    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.187680    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:21.182767    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.183788    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185393    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185998    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.187680    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:21.192043  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:21.192057  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:21.276022  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:21.276060  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:21.308782  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:21.308822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:21.396093  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:21.396132  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:21.438867  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:21.438900  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:21.463876  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:21.463906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:21.500802  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:21.500843  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:21.550471  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:21.550508  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:21.590310  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:21.590349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:24.119676  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:24.131693  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:24.131783  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:24.163845  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:24.163870  346554 cri.go:89] found id: ""
	I1002 07:20:24.163879  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:24.163939  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.167667  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:24.167742  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:24.195635  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:24.195658  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:24.195664  346554 cri.go:89] found id: ""
	I1002 07:20:24.195672  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:24.195731  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.199786  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.204099  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:24.204199  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:24.233690  346554 cri.go:89] found id: ""
	I1002 07:20:24.233716  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.233726  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:24.233733  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:24.233790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:24.262505  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:24.262565  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:24.262586  346554 cri.go:89] found id: ""
	I1002 07:20:24.262614  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:24.262691  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.266650  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.270417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:24.270511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:24.297687  346554 cri.go:89] found id: ""
	I1002 07:20:24.297713  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.297723  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:24.297729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:24.297790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:24.325175  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:24.325197  346554 cri.go:89] found id: ""
	I1002 07:20:24.325205  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:24.325284  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.329310  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:24.329399  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:24.358432  346554 cri.go:89] found id: ""
	I1002 07:20:24.358458  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.358468  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:24.358477  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:24.358489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:24.418997  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:24.419034  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:24.449127  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:24.449155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:24.545814  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:24.545853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:24.561748  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:24.561777  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:24.632202  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:24.623701    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.624508    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626130    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626462    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.628020    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:24.623701    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.624508    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626130    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626462    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.628020    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:24.632226  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:24.632239  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:24.662637  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:24.662668  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:24.740789  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:24.740830  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:24.773325  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:24.773357  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:24.807399  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:24.807428  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:24.853933  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:24.853972  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:27.396082  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:27.406955  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:27.407027  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:27.435147  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:27.435171  346554 cri.go:89] found id: ""
	I1002 07:20:27.435180  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:27.435238  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.440669  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:27.440745  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:27.467109  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:27.467176  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:27.467196  346554 cri.go:89] found id: ""
	I1002 07:20:27.467205  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:27.467275  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.471217  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.474815  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:27.474888  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:27.503111  346554 cri.go:89] found id: ""
	I1002 07:20:27.503136  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.503145  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:27.503152  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:27.503222  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:27.540213  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:27.540253  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:27.540260  346554 cri.go:89] found id: ""
	I1002 07:20:27.540276  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:27.540359  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.544590  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.548529  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:27.548605  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:27.577677  346554 cri.go:89] found id: ""
	I1002 07:20:27.577746  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.577772  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:27.577798  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:27.577892  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:27.607310  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:27.607329  346554 cri.go:89] found id: ""
	I1002 07:20:27.607337  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:27.607393  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.611619  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:27.611690  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:27.647844  346554 cri.go:89] found id: ""
	I1002 07:20:27.647872  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.647882  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:27.647892  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:27.647905  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:27.723377  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:27.713686    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.714844    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.715834    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717611    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717950    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:27.713686    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.714844    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.715834    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717611    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717950    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:27.723400  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:27.723419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:27.750902  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:27.750932  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:27.804228  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:27.804267  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:27.866989  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:27.867068  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:27.895361  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:27.895393  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:28.004869  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:28.004912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:28.030605  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:28.030637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:28.090494  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:28.090531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:28.120915  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:28.120953  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:28.213702  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:28.213740  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:30.746147  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:30.758010  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:30.758090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:30.789909  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:30.789936  346554 cri.go:89] found id: ""
	I1002 07:20:30.789945  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:30.790004  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.794321  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:30.794407  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:30.823421  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:30.823445  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:30.823451  346554 cri.go:89] found id: ""
	I1002 07:20:30.823459  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:30.823520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.827486  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.831334  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:30.831416  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:30.857968  346554 cri.go:89] found id: ""
	I1002 07:20:30.857996  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.858005  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:30.858012  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:30.858073  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:30.885972  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:30.885997  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:30.886002  346554 cri.go:89] found id: ""
	I1002 07:20:30.886010  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:30.886074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.891710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.897102  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:30.897174  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:30.928917  346554 cri.go:89] found id: ""
	I1002 07:20:30.928944  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.928953  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:30.928960  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:30.929079  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:30.957428  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:30.957456  346554 cri.go:89] found id: ""
	I1002 07:20:30.957465  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:30.957524  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.961555  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:30.961638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:30.991607  346554 cri.go:89] found id: ""
	I1002 07:20:30.991644  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.991654  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:30.991664  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:30.991682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:31.034696  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:31.034732  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:31.095475  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:31.095521  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:31.124509  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:31.124543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:31.164950  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:31.164982  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:31.242438  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:31.232305    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.233259    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.234890    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.236692    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.237374    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:31.232305    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.233259    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.234890    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.236692    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.237374    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:31.242461  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:31.242475  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:31.288791  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:31.288829  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:31.324555  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:31.324590  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:31.358683  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:31.358775  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:31.442957  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:31.443002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:31.546184  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:31.546226  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
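Each pass above is the same log-collection sweep: for every control-plane component, minikube lists matching CRI containers with "sudo crictl ps -a --quiet --name=<component>", tails the last 400 lines of each container it found with "crictl logs --tail 400 <id>", and finishes with host-level logs from journalctl and dmesg. The bash sketch below reproduces that command flow for reference; the individual commands are the ones shown in the log, while the script structure (the component list variable, the loop, the message wording) is an illustrative assumption rather than minikube's own code.

    #!/usr/bin/env bash
    # Illustrative sketch of the per-component log sweep seen above.
    # Assumes crictl is on PATH and talks to the node's CRI-O socket;
    # it mirrors the gathered commands, it is not minikube's implementation.
    set -euo pipefail

    components="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet"

    for name in $components; do
      # List all containers (running or exited) whose name matches the component.
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\"" >&2
        continue
      fi
      for id in $ids; do
        echo "=== $name ($id) ==="
        sudo crictl logs --tail 400 "$id"   # tail the last 400 log lines
      done
    done

    # Host-level logs collected in the same pass.
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400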
	I1002 07:20:34.062520  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:34.074346  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:34.074429  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:34.104094  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:34.104116  346554 cri.go:89] found id: ""
	I1002 07:20:34.104124  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:34.104184  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.108168  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:34.108242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:34.134780  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:34.134803  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:34.134808  346554 cri.go:89] found id: ""
	I1002 07:20:34.134816  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:34.134873  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.140158  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.144631  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:34.144709  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:34.171174  346554 cri.go:89] found id: ""
	I1002 07:20:34.171197  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.171209  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:34.171216  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:34.171279  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:34.201197  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:34.201265  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:34.201279  346554 cri.go:89] found id: ""
	I1002 07:20:34.201289  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:34.201358  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.205487  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.209274  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:34.209371  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:34.236797  346554 cri.go:89] found id: ""
	I1002 07:20:34.236823  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.236832  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:34.236839  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:34.236899  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:34.268130  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:34.268153  346554 cri.go:89] found id: ""
	I1002 07:20:34.268163  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:34.268221  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.272288  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:34.272494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:34.303012  346554 cri.go:89] found id: ""
	I1002 07:20:34.303036  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.303046  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:34.303057  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:34.303069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:34.330987  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:34.331016  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:34.409294  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:34.409332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:34.444890  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:34.444921  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:34.529848  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:34.521813    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.522492    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.523830    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.524582    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.526232    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:34.521813    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.522492    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.523830    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.524582    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.526232    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:34.529873  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:34.529887  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:34.576746  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:34.576783  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:34.617959  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:34.617994  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:34.680077  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:34.680116  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:34.709769  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:34.709801  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:34.741411  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:34.741440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:34.841059  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:34.841096  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:37.359292  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:37.370946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:37.371032  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:37.399137  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:37.399162  346554 cri.go:89] found id: ""
	I1002 07:20:37.399171  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:37.399230  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.403338  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:37.403412  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:37.430753  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:37.430777  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:37.430782  346554 cri.go:89] found id: ""
	I1002 07:20:37.430790  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:37.430846  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.434756  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.440208  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:37.440282  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:37.466624  346554 cri.go:89] found id: ""
	I1002 07:20:37.466708  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.466741  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:37.466763  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:37.466859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:37.494022  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:37.494043  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:37.494049  346554 cri.go:89] found id: ""
	I1002 07:20:37.494057  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:37.494137  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.498098  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.502412  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:37.502500  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:37.535920  346554 cri.go:89] found id: ""
	I1002 07:20:37.535947  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.535956  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:37.535963  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:37.536022  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:37.562970  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:37.562994  346554 cri.go:89] found id: ""
	I1002 07:20:37.563004  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:37.563062  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.567000  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:37.567077  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:37.595796  346554 cri.go:89] found id: ""
	I1002 07:20:37.595823  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.595832  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:37.595842  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:37.595875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:37.622318  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:37.622347  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:37.698567  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:37.698606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:37.730294  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:37.730323  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:37.746780  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:37.746819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:37.774051  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:37.774082  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:37.842657  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:37.842692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:37.879058  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:37.879101  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:37.958213  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:37.958255  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:38.066523  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:38.066564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:38.140589  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:38.132053    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.132715    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.134486    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.135135    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.136775    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:38.132053    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.132715    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.134486    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.135135    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.136775    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:38.140614  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:38.140628  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:40.668101  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:40.680533  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:40.680613  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:40.709182  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:40.709201  346554 cri.go:89] found id: ""
	I1002 07:20:40.709217  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:40.709275  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.714063  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:40.714131  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:40.741940  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:40.741960  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:40.741965  346554 cri.go:89] found id: ""
	I1002 07:20:40.741972  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:40.742030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.746103  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.749819  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:40.749890  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:40.779806  346554 cri.go:89] found id: ""
	I1002 07:20:40.779869  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.779893  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:40.779918  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:40.779999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:40.818846  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:40.818910  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:40.818930  346554 cri.go:89] found id: ""
	I1002 07:20:40.818956  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:40.819034  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.825049  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.829111  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:40.829255  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:40.857000  346554 cri.go:89] found id: ""
	I1002 07:20:40.857070  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.857101  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:40.857116  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:40.857204  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:40.890997  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:40.891021  346554 cri.go:89] found id: ""
	I1002 07:20:40.891030  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:40.891120  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.902062  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:40.902188  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:40.931155  346554 cri.go:89] found id: ""
	I1002 07:20:40.931192  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.931201  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:40.931258  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:40.931282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:40.968238  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:40.968267  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:41.004537  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:41.004577  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:41.077656  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:41.077693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:41.110709  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:41.110738  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:41.146808  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:41.146839  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:41.218315  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:41.209116    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.209601    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.211401    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213018    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213363    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:41.209116    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.209601    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.211401    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213018    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213363    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:41.218395  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:41.218476  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:41.270106  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:41.270141  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:41.300977  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:41.301007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:41.385349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:41.385387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:41.485614  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:41.485658  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
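Every "describe nodes" attempt in these passes fails the same way: kubectl cannot reach https://localhost:8443 because, although a kube-apiserver container exists, nothing is serving on that port yet, so each request ends in "connection refused". That is also why the loop keeps re-running "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every three seconds before starting the next sweep. A minimal wait loop for the same condition is sketched below; the 120-second deadline and the final readiness probe via "kubectl get --raw /readyz" are assumptions made for this sketch, while the pgrep pattern and the kubectl path come from the log.

    #!/usr/bin/env bash
    # Sketch of a wait-for-apiserver poll matching what the log is checking.
    # The deadline and the /readyz probe are assumptions; the pgrep pattern
    # and the kubectl path are taken from the gathered commands above.
    deadline=$((SECONDS + 120))

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for the kube-apiserver process" >&2
        exit 1
      fi
      sleep 3   # the log re-checks on roughly this interval
    done

    # Until the process is up, every call to localhost:8443 (including
    # "kubectl describe nodes") fails with "connection refused".
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl get --raw /readyz \
      --kubeconfig=/var/lib/minikube/kubeconfig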
	I1002 07:20:44.002362  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:44.017480  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:44.017558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:44.055626  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:44.055653  346554 cri.go:89] found id: ""
	I1002 07:20:44.055662  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:44.055736  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.059917  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:44.059997  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:44.097033  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:44.097067  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:44.097072  346554 cri.go:89] found id: ""
	I1002 07:20:44.097079  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:44.097147  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.101257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.105790  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:44.105890  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:44.134184  346554 cri.go:89] found id: ""
	I1002 07:20:44.134213  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.134222  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:44.134229  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:44.134316  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:44.172910  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:44.172972  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:44.172992  346554 cri.go:89] found id: ""
	I1002 07:20:44.173019  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:44.173087  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.177020  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.181101  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:44.181189  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:44.210050  346554 cri.go:89] found id: ""
	I1002 07:20:44.210072  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.210081  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:44.210088  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:44.210148  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:44.236942  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:44.236966  346554 cri.go:89] found id: ""
	I1002 07:20:44.236975  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:44.237032  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.240886  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:44.240968  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:44.267437  346554 cri.go:89] found id: ""
	I1002 07:20:44.267471  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.267482  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:44.267498  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:44.267522  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:44.311617  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:44.311650  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:44.371464  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:44.371502  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:44.401657  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:44.401685  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:44.429428  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:44.429458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:44.457332  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:44.457370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:44.542400  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:44.542441  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:44.576729  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:44.576808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:44.671950  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:44.671991  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:44.688074  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:44.688102  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:44.772308  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:44.762400    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.763526    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.764141    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766001    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766685    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:44.762400    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.763526    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.764141    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766001    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766685    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:44.772331  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:44.772344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.326275  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:47.337461  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:47.337588  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:47.370813  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:47.370885  346554 cri.go:89] found id: ""
	I1002 07:20:47.370909  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:47.370985  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.375983  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:47.376102  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:47.408952  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.409021  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:47.409046  346554 cri.go:89] found id: ""
	I1002 07:20:47.409075  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:47.409142  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.412894  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.416604  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:47.416678  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:47.443724  346554 cri.go:89] found id: ""
	I1002 07:20:47.443746  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.443755  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:47.443761  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:47.443825  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:47.472814  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:47.472835  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:47.472840  346554 cri.go:89] found id: ""
	I1002 07:20:47.472848  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:47.472910  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.476853  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.481052  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:47.481125  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:47.527292  346554 cri.go:89] found id: ""
	I1002 07:20:47.527316  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.527325  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:47.527331  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:47.527396  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:47.557465  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:47.557493  346554 cri.go:89] found id: ""
	I1002 07:20:47.557502  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:47.557573  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.561605  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:47.561776  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:47.592217  346554 cri.go:89] found id: ""
	I1002 07:20:47.592251  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.592261  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:47.592270  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:47.592282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:47.609667  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:47.609697  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:47.670961  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:47.670999  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:47.701512  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:47.701543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:47.730463  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:47.730493  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:47.813379  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:47.804825    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.805487    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.806775    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.807262    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.808792    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:47.804825    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.805487    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.806775    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.807262    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.808792    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:47.813403  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:47.813417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:47.839632  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:47.839663  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.890767  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:47.890807  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:47.931484  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:47.931519  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:48.013592  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:48.013683  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:48.048341  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:48.048371  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:50.660679  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:50.672098  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:50.672208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:50.698977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:50.699002  346554 cri.go:89] found id: ""
	I1002 07:20:50.699012  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:50.699155  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.703120  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:50.703197  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:50.731004  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:50.731030  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:50.731035  346554 cri.go:89] found id: ""
	I1002 07:20:50.731043  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:50.731134  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.735170  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.739036  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:50.739228  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:50.765233  346554 cri.go:89] found id: ""
	I1002 07:20:50.765257  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.765267  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:50.765276  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:50.765337  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:50.798825  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:50.798846  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:50.798851  346554 cri.go:89] found id: ""
	I1002 07:20:50.798858  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:50.798922  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.803023  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.806604  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:50.806684  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:50.834561  346554 cri.go:89] found id: ""
	I1002 07:20:50.834595  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.834605  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:50.834612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:50.834685  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:50.862616  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:50.862640  346554 cri.go:89] found id: ""
	I1002 07:20:50.862649  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:50.862719  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.866512  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:50.866591  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:50.894801  346554 cri.go:89] found id: ""
	I1002 07:20:50.894874  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.894898  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:50.894927  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:50.894970  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:50.922014  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:50.922093  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:50.963158  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:50.963238  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:51.041253  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:51.041298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:51.078068  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:51.078373  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:51.109345  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:51.109379  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:51.143553  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:51.143586  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:51.160251  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:51.160287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:51.232331  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:51.222843    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.223585    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226402    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226914    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.228078    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:51.222843    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.223585    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226402    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226914    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.228078    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:51.232357  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:51.232370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:51.284859  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:51.284891  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:51.366726  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:51.366764  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:53.965349  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:53.977241  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:53.977365  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:54.007342  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:54.007370  346554 cri.go:89] found id: ""
	I1002 07:20:54.007379  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:54.007452  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.014154  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:54.014243  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:54.042738  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:54.042761  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:54.042767  346554 cri.go:89] found id: ""
	I1002 07:20:54.042787  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:54.042849  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.047324  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.052426  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:54.052514  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:54.092137  346554 cri.go:89] found id: ""
	I1002 07:20:54.092162  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.092171  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:54.092177  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:54.092245  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:54.123873  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:54.123895  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:54.123900  346554 cri.go:89] found id: ""
	I1002 07:20:54.123908  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:54.123966  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.128307  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.132643  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:54.132764  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:54.167072  346554 cri.go:89] found id: ""
	I1002 07:20:54.167173  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.167197  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:54.167223  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:54.167317  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:54.201096  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:54.201124  346554 cri.go:89] found id: ""
	I1002 07:20:54.201133  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:54.201192  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.205200  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:54.205319  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:54.232346  346554 cri.go:89] found id: ""
	I1002 07:20:54.232375  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.232384  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:54.232394  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:54.232424  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:54.307053  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:54.297800    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.298604    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.300420    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.301180    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.302885    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:54.297800    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.298604    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.300420    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.301180    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.302885    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:54.307076  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:54.307120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:54.339765  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:54.339797  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:54.389419  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:54.389463  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:54.427898  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:54.427934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:54.459945  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:54.459979  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:54.495013  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:54.495049  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:54.593488  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:54.593523  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:54.699166  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:54.699248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:54.715185  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:54.715217  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:54.790047  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:54.790081  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:57.332703  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:57.343440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:57.343508  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:57.371159  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:57.371224  346554 cri.go:89] found id: ""
	I1002 07:20:57.371248  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:57.371325  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.376379  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:57.376455  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:57.403394  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:57.403417  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:57.403423  346554 cri.go:89] found id: ""
	I1002 07:20:57.403431  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:57.403486  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.407238  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.410942  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:57.411033  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:57.438995  346554 cri.go:89] found id: ""
	I1002 07:20:57.439020  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.439029  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:57.439036  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:57.439133  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:57.471614  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:57.471639  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:57.471644  346554 cri.go:89] found id: ""
	I1002 07:20:57.471656  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:57.471714  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.475670  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.479817  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:57.479927  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:57.514129  346554 cri.go:89] found id: ""
	I1002 07:20:57.514152  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.514160  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:57.514166  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:57.514229  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:57.540930  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:57.540954  346554 cri.go:89] found id: ""
	I1002 07:20:57.540963  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:57.541019  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.545166  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:57.545246  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:57.580607  346554 cri.go:89] found id: ""
	I1002 07:20:57.580633  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.580643  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:57.580653  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:57.580682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:57.662349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:57.662389  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:57.761863  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:57.761900  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:57.830325  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:57.830366  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:57.856569  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:57.856598  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:57.888135  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:57.888164  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:57.906242  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:57.906270  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:57.976993  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:57.967788    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.968516    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.970387    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.971058    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.973057    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:57.967788    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.968516    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.970387    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.971058    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.973057    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:57.977018  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:57.977033  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:58.011287  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:58.011323  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:58.063746  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:58.063782  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:58.114504  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:58.114539  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:00.655161  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:00.666760  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:00.666847  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:00.699194  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:00.699218  346554 cri.go:89] found id: ""
	I1002 07:21:00.699227  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:00.699283  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.703475  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:00.703551  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:00.730837  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:00.730862  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:00.730867  346554 cri.go:89] found id: ""
	I1002 07:21:00.730874  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:00.730933  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.734900  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.738704  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:00.738777  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:00.765809  346554 cri.go:89] found id: ""
	I1002 07:21:00.765832  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.765841  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:00.765847  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:00.765903  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:00.806888  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:00.806911  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:00.806916  346554 cri.go:89] found id: ""
	I1002 07:21:00.806924  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:00.806982  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.810980  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.815454  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:00.815527  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:00.843377  346554 cri.go:89] found id: ""
	I1002 07:21:00.843403  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.843413  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:00.843419  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:00.843480  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:00.870064  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:00.870084  346554 cri.go:89] found id: ""
	I1002 07:21:00.870094  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:00.870150  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.874067  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:00.874142  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:00.912375  346554 cri.go:89] found id: ""
	I1002 07:21:00.912400  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.912409  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:00.912419  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:00.912437  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:01.010660  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:01.010703  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:01.027564  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:01.027589  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:01.108980  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:01.099987    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101432    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101988    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103531    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103983    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:01.099987    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101432    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101988    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103531    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103983    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:01.109003  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:01.109017  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:01.140899  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:01.140925  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:01.201677  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:01.201719  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:01.249485  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:01.249516  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:01.310648  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:01.310682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:01.339591  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:01.339668  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:01.368293  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:01.368363  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:01.451526  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:01.451565  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:03.985004  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:03.995665  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:03.995732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:04.038756  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:04.038786  346554 cri.go:89] found id: ""
	I1002 07:21:04.038796  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:04.038863  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.042734  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:04.042813  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:04.080960  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:04.080984  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:04.080990  346554 cri.go:89] found id: ""
	I1002 07:21:04.080998  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:04.081055  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.085045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.088904  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:04.088984  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:04.116470  346554 cri.go:89] found id: ""
	I1002 07:21:04.116495  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.116504  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:04.116511  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:04.116568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:04.143301  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:04.143324  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:04.143330  346554 cri.go:89] found id: ""
	I1002 07:21:04.143336  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:04.143392  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.149220  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.156754  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:04.156875  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:04.186088  346554 cri.go:89] found id: ""
	I1002 07:21:04.186115  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.186125  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:04.186131  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:04.186222  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:04.213953  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:04.213978  346554 cri.go:89] found id: ""
	I1002 07:21:04.213987  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:04.214074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.220236  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:04.220339  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:04.249797  346554 cri.go:89] found id: ""
	I1002 07:21:04.249825  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.249834  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:04.249876  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:04.249893  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:04.334427  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:04.334464  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:04.365264  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:04.365294  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:04.467641  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:04.467693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:04.495501  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:04.495532  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:04.553841  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:04.553879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:04.590884  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:04.590912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:04.618124  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:04.618157  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:04.634781  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:04.634812  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:04.712412  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:04.704035    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.704877    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706460    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706999    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.708596    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:04.704035    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.704877    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706460    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706999    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.708596    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:04.712440  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:04.712458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:04.772367  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:04.772405  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:07.313327  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:07.324335  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:07.324410  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:07.352343  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:07.352367  346554 cri.go:89] found id: ""
	I1002 07:21:07.352376  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:07.352456  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.356634  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:07.356705  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:07.384754  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:07.384778  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:07.384783  346554 cri.go:89] found id: ""
	I1002 07:21:07.384791  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:07.384871  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.388840  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.392572  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:07.392672  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:07.418573  346554 cri.go:89] found id: ""
	I1002 07:21:07.418605  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.418615  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:07.418622  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:07.418681  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:07.450415  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:07.450439  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:07.450445  346554 cri.go:89] found id: ""
	I1002 07:21:07.450466  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:07.450529  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.454971  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.459463  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:07.459539  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:07.488692  346554 cri.go:89] found id: ""
	I1002 07:21:07.488722  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.488730  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:07.488737  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:07.488799  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:07.520325  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:07.520350  346554 cri.go:89] found id: ""
	I1002 07:21:07.520359  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:07.520421  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.524256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:07.524330  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:07.549519  346554 cri.go:89] found id: ""
	I1002 07:21:07.549540  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.549548  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:07.549558  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:07.549569  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:07.643274  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:07.643315  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:07.716156  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:07.708091    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.708893    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710592    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710902    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.712357    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:07.708091    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.708893    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710592    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710902    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.712357    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:07.716179  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:07.716195  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:07.743950  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:07.743980  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:07.830226  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:07.830266  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:07.847230  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:07.847260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:07.875839  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:07.875908  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:07.937408  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:07.937448  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:07.974391  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:07.974428  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:08.044504  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:08.044544  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:08.085844  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:08.085875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:10.619391  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:10.631035  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:10.631208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:10.664959  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:10.664983  346554 cri.go:89] found id: ""
	I1002 07:21:10.664992  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:10.665070  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.668812  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:10.668884  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:10.695400  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:10.695424  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:10.695430  346554 cri.go:89] found id: ""
	I1002 07:21:10.695438  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:10.695526  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.699317  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.703430  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:10.703524  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:10.728859  346554 cri.go:89] found id: ""
	I1002 07:21:10.728883  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.728892  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:10.728898  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:10.728974  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:10.754882  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:10.754905  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:10.754911  346554 cri.go:89] found id: ""
	I1002 07:21:10.754918  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:10.754984  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.758686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.762139  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:10.762248  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:10.787999  346554 cri.go:89] found id: ""
	I1002 07:21:10.788067  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.788092  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:10.788115  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:10.788204  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:10.814729  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:10.814803  346554 cri.go:89] found id: ""
	I1002 07:21:10.814825  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:10.814914  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.818388  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:10.818483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:10.845398  346554 cri.go:89] found id: ""
	I1002 07:21:10.845424  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.845433  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:10.845443  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:10.845482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:10.873199  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:10.873225  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:10.951572  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:10.951609  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:11.051035  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:11.051118  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:11.130878  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:11.121998    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.122765    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.124521    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.125102    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.126722    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:11.121998    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.122765    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.124521    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.125102    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.126722    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:11.130909  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:11.130924  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:11.156885  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:11.156920  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:11.211573  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:11.211615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:11.272703  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:11.272742  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:11.301304  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:11.301336  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:11.342833  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:11.342861  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:11.360176  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:11.360204  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:13.902061  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:13.915871  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:13.915935  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:13.954412  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:13.954439  346554 cri.go:89] found id: ""
	I1002 07:21:13.954448  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:13.954513  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:13.959571  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:13.959655  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:13.994709  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:13.994729  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:13.994735  346554 cri.go:89] found id: ""
	I1002 07:21:13.994743  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:13.994797  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:13.999427  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.003663  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:14.003749  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:14.042653  346554 cri.go:89] found id: ""
	I1002 07:21:14.042680  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.042690  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:14.042696  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:14.042757  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:14.087595  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:14.087615  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:14.087620  346554 cri.go:89] found id: ""
	I1002 07:21:14.087628  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:14.087688  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.092427  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.096855  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:14.096920  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:14.126816  346554 cri.go:89] found id: ""
	I1002 07:21:14.126843  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.126852  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:14.126858  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:14.126918  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:14.155318  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:14.155339  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:14.155344  346554 cri.go:89] found id: ""
	I1002 07:21:14.155351  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:14.155407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.159934  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.164569  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:14.164634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:14.209412  346554 cri.go:89] found id: ""
	I1002 07:21:14.209437  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.209449  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:14.209459  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:14.209471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:14.225995  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:14.226022  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:14.263998  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:14.264027  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:14.360121  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:14.360159  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:14.407199  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:14.407234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:14.434782  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:14.434814  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:14.521080  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:14.521121  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:14.593104  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:14.593134  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:14.699269  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:14.699308  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:14.786512  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:14.774915    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.778879    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.779597    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781358    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781959    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:14.774915    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.778879    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.779597    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781358    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781959    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:14.786535  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:14.786548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:14.869065  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:14.869109  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:14.900362  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:14.900454  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:17.430222  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:17.442136  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:17.442212  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:17.468618  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:17.468642  346554 cri.go:89] found id: ""
	I1002 07:21:17.468664  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:17.468722  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.472407  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:17.472483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:17.500441  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:17.500462  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:17.500468  346554 cri.go:89] found id: ""
	I1002 07:21:17.500475  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:17.500534  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.504574  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.511111  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:17.511190  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:17.539180  346554 cri.go:89] found id: ""
	I1002 07:21:17.539208  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.539217  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:17.539224  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:17.539283  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:17.567616  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:17.567641  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:17.567647  346554 cri.go:89] found id: ""
	I1002 07:21:17.567654  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:17.567710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.571727  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.575519  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:17.575603  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:17.601045  346554 cri.go:89] found id: ""
	I1002 07:21:17.601070  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.601079  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:17.601086  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:17.601143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:17.628358  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:17.628379  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:17.628384  346554 cri.go:89] found id: ""
	I1002 07:21:17.628391  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:17.628479  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.632534  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.636208  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:17.636286  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:17.662364  346554 cri.go:89] found id: ""
	I1002 07:21:17.662389  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.662398  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:17.662408  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:17.662419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:17.756609  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:17.756643  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:17.772784  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:17.772821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:17.854603  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:17.846770    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.847523    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849095    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849421    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.850951    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:17.846770    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.847523    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849095    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849421    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.850951    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:17.854625  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:17.854639  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:17.890480  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:17.890513  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:17.955720  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:17.955755  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:17.986877  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:17.986906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:18.065618  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:18.065659  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:18.111257  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:18.111287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:18.141121  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:18.141151  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:18.202491  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:18.202530  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:18.232094  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:18.232124  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:20.762758  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:20.773630  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:20.773708  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:20.806503  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:20.806533  346554 cri.go:89] found id: ""
	I1002 07:21:20.806542  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:20.806599  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.810265  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:20.810338  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:20.839055  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:20.839105  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:20.839111  346554 cri.go:89] found id: ""
	I1002 07:21:20.839119  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:20.839176  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.843029  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.846663  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:20.846743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:20.875148  346554 cri.go:89] found id: ""
	I1002 07:21:20.875173  346554 logs.go:282] 0 containers: []
	W1002 07:21:20.875183  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:20.875190  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:20.875249  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:20.907677  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:20.907701  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:20.907707  346554 cri.go:89] found id: ""
	I1002 07:21:20.907715  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:20.907772  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.911686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.915632  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:20.915707  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:20.941873  346554 cri.go:89] found id: ""
	I1002 07:21:20.941899  346554 logs.go:282] 0 containers: []
	W1002 07:21:20.941908  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:20.941915  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:20.941975  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:20.973490  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:20.973515  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:20.973521  346554 cri.go:89] found id: ""
	I1002 07:21:20.973530  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:20.973585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.977414  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.981138  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:20.981213  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:21.013505  346554 cri.go:89] found id: ""
	I1002 07:21:21.013533  346554 logs.go:282] 0 containers: []
	W1002 07:21:21.013543  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:21.013553  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:21.013565  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:21.047930  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:21.047959  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:21.144461  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:21.144498  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:21.218444  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:21.209931    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.210755    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212333    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212924    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.214549    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:21.209931    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.210755    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212333    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212924    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.214549    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:21.218469  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:21.218482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:21.244979  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:21.245010  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:21.273907  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:21.273940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:21.304310  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:21.304341  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:21.383311  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:21.383390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:21.418944  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:21.418976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:21.437126  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:21.437154  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:21.499338  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:21.499373  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:21.541388  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:21.541424  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:24.103318  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:24.114524  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:24.114645  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:24.142263  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:24.142286  346554 cri.go:89] found id: ""
	I1002 07:21:24.142295  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:24.142357  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.146924  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:24.146998  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:24.174920  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:24.174945  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:24.174950  346554 cri.go:89] found id: ""
	I1002 07:21:24.174958  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:24.175015  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.179961  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.183781  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:24.183859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:24.213946  346554 cri.go:89] found id: ""
	I1002 07:21:24.213969  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.213978  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:24.213985  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:24.214044  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:24.240875  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:24.240898  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:24.240903  346554 cri.go:89] found id: ""
	I1002 07:21:24.240910  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:24.240967  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.244817  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.248504  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:24.248601  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:24.277554  346554 cri.go:89] found id: ""
	I1002 07:21:24.277579  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.277588  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:24.277595  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:24.277675  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:24.308411  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:24.308507  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:24.308518  346554 cri.go:89] found id: ""
	I1002 07:21:24.308526  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:24.308585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.312514  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.316209  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:24.316322  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:24.352013  346554 cri.go:89] found id: ""
	I1002 07:21:24.352037  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.352047  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:24.352057  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:24.352070  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:24.392888  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:24.392926  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:24.422136  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:24.422162  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:24.522148  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:24.522189  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:24.559761  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:24.559789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:24.635577  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:24.626450    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.627161    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.628806    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.629342    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.630887    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:24.626450    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.627161    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.628806    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.629342    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.630887    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:24.635658  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:24.635688  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:24.664008  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:24.664038  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:24.716205  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:24.716243  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:24.776422  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:24.776465  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:24.812576  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:24.812606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:24.850011  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:24.850051  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:24.957619  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:24.957658  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:27.474346  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:27.486924  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:27.486999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:27.527387  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:27.527411  346554 cri.go:89] found id: ""
	I1002 07:21:27.527419  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:27.527481  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.531347  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:27.531425  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:27.557184  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:27.557209  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:27.557216  346554 cri.go:89] found id: ""
	I1002 07:21:27.557226  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:27.557285  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.561185  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.564887  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:27.564964  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:27.593958  346554 cri.go:89] found id: ""
	I1002 07:21:27.593984  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.593993  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:27.594000  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:27.594070  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:27.624297  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:27.624321  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:27.624325  346554 cri.go:89] found id: ""
	I1002 07:21:27.624332  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:27.624390  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.628548  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.632313  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:27.632401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:27.658827  346554 cri.go:89] found id: ""
	I1002 07:21:27.658850  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.658858  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:27.658876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:27.658942  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:27.687346  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:27.687422  346554 cri.go:89] found id: ""
	I1002 07:21:27.687438  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:27.687516  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.691438  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:27.691563  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:27.716933  346554 cri.go:89] found id: ""
	I1002 07:21:27.716959  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.716969  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:27.716979  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:27.717019  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:27.817783  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:27.817831  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:27.857490  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:27.857525  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:27.885125  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:27.885157  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:27.918095  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:27.918133  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:27.933988  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:27.934018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:28.004686  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:27.994706    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.995565    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997325    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997806    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.999393    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:27.994706    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.995565    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997325    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997806    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.999393    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:28.004719  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:28.004734  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:28.034260  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:28.034287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:28.093230  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:28.093269  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:28.164138  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:28.164177  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:28.195157  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:28.195188  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:30.778568  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:30.789765  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:30.789833  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:30.825174  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:30.825194  346554 cri.go:89] found id: ""
	I1002 07:21:30.825202  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:30.825257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.829729  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:30.829796  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:30.856611  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:30.856632  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:30.856637  346554 cri.go:89] found id: ""
	I1002 07:21:30.856644  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:30.856701  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.860561  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.864279  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:30.864353  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:30.891192  346554 cri.go:89] found id: ""
	I1002 07:21:30.891217  346554 logs.go:282] 0 containers: []
	W1002 07:21:30.891257  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:30.891269  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:30.891353  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:30.918873  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:30.918892  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:30.918897  346554 cri.go:89] found id: ""
	I1002 07:21:30.918904  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:30.918965  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.922949  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.926830  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:30.926928  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:30.953030  346554 cri.go:89] found id: ""
	I1002 07:21:30.953059  346554 logs.go:282] 0 containers: []
	W1002 07:21:30.953068  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:30.953074  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:30.953131  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:30.980458  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:30.980480  346554 cri.go:89] found id: ""
	I1002 07:21:30.980489  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:30.980547  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.984323  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:30.984450  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:31.026334  346554 cri.go:89] found id: ""
	I1002 07:21:31.026360  346554 logs.go:282] 0 containers: []
	W1002 07:21:31.026370  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:31.026380  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:31.026416  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:31.058391  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:31.058420  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:31.116004  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:31.116040  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:31.151060  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:31.151099  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:31.231368  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:31.231406  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:31.332798  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:31.332835  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:31.413678  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:31.405625    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.406285    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.407900    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.408576    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.410010    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:31.405625    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.406285    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.407900    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.408576    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.410010    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:31.413705  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:31.413717  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:31.461265  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:31.461299  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:31.534946  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:31.534986  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:31.562600  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:31.562629  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:31.592876  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:31.592906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:34.110078  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:34.121201  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:34.121271  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:34.148533  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:34.148554  346554 cri.go:89] found id: ""
	I1002 07:21:34.148562  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:34.148621  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.152503  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:34.152585  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:34.181027  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:34.181050  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:34.181056  346554 cri.go:89] found id: ""
	I1002 07:21:34.181063  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:34.181117  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.185002  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.189485  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:34.189560  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:34.215599  346554 cri.go:89] found id: ""
	I1002 07:21:34.215625  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.215634  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:34.215641  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:34.215699  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:34.241734  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:34.241763  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:34.241768  346554 cri.go:89] found id: ""
	I1002 07:21:34.241776  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:34.241832  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.245545  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.248974  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:34.249050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:34.276023  346554 cri.go:89] found id: ""
	I1002 07:21:34.276049  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.276059  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:34.276072  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:34.276132  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:34.303384  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:34.303407  346554 cri.go:89] found id: ""
	I1002 07:21:34.303415  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:34.303472  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.307469  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:34.307539  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:34.340234  346554 cri.go:89] found id: ""
	I1002 07:21:34.340261  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.340271  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:34.340281  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:34.340293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:34.356522  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:34.356550  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:34.394796  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:34.394825  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:34.443502  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:34.443538  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:34.474055  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:34.474081  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:34.555556  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:34.555637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:34.658066  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:34.658101  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:34.733631  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:34.724940    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.725631    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.727437    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.728124    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.729973    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:34.724940    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.725631    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.727437    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.728124    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.729973    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:34.733651  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:34.733665  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:34.784032  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:34.784068  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:34.847736  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:34.847771  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:34.875075  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:34.875172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:37.408950  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:37.421164  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:37.421273  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:37.452410  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:37.452439  346554 cri.go:89] found id: ""
	I1002 07:21:37.452449  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:37.452505  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.456325  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:37.456445  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:37.486317  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:37.486340  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:37.486346  346554 cri.go:89] found id: ""
	I1002 07:21:37.486353  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:37.486451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.490342  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.494027  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:37.494104  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:37.527183  346554 cri.go:89] found id: ""
	I1002 07:21:37.527257  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.527281  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:37.527305  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:37.527403  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:37.553164  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:37.553189  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:37.553194  346554 cri.go:89] found id: ""
	I1002 07:21:37.553202  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:37.553263  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.557191  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.560812  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:37.560909  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:37.592768  346554 cri.go:89] found id: ""
	I1002 07:21:37.592837  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.592861  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:37.592887  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:37.592973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:37.619244  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:37.619275  346554 cri.go:89] found id: ""
	I1002 07:21:37.619285  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:37.619382  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.622994  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:37.623067  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:37.654796  346554 cri.go:89] found id: ""
	I1002 07:21:37.654833  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.654843  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:37.654853  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:37.654864  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:37.735865  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:37.735903  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:37.829667  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:37.829705  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:37.906371  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:37.897524    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.898687    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.899551    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901063    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901395    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:37.897524    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.898687    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.899551    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901063    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901395    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:37.906396  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:37.906409  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:37.931859  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:37.931891  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:37.982107  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:37.982141  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:38.026363  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:38.026402  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:38.097347  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:38.097387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:38.129911  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:38.129940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:38.174203  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:38.174233  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:38.192324  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:38.192356  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:40.723244  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:40.733967  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:40.734044  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:40.761160  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:40.761180  346554 cri.go:89] found id: ""
	I1002 07:21:40.761196  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:40.761257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.764997  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:40.765082  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:40.793331  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:40.793357  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:40.793376  346554 cri.go:89] found id: ""
	I1002 07:21:40.793385  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:40.793441  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.799890  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.803764  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:40.803836  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:40.834660  346554 cri.go:89] found id: ""
	I1002 07:21:40.834686  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.834696  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:40.834702  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:40.834765  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:40.866063  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:40.866087  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:40.866093  346554 cri.go:89] found id: ""
	I1002 07:21:40.866103  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:40.866168  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.870407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.873946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:40.874058  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:40.908301  346554 cri.go:89] found id: ""
	I1002 07:21:40.908367  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.908391  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:40.908417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:40.908494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:40.937896  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:40.937966  346554 cri.go:89] found id: ""
	I1002 07:21:40.937990  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:40.938080  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.941880  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:40.941952  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:40.967147  346554 cri.go:89] found id: ""
	I1002 07:21:40.967174  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.967190  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:40.967226  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:40.967238  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:41.061039  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:41.061077  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:41.080254  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:41.080282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:41.108521  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:41.108547  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:41.162117  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:41.162154  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:41.233238  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:41.233276  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:41.260363  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:41.260392  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:41.333767  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:41.325094    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.325822    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.326721    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328411    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328796    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:41.325094    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.325822    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.326721    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328411    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328796    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:41.333840  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:41.333863  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:41.370518  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:41.370556  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:41.399620  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:41.399646  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:41.485257  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:41.485299  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:44.031564  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:44.043423  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:44.043501  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:44.077366  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:44.077391  346554 cri.go:89] found id: ""
	I1002 07:21:44.077400  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:44.077473  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.082216  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:44.082297  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:44.114495  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:44.114564  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:44.114585  346554 cri.go:89] found id: ""
	I1002 07:21:44.114612  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:44.114701  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.118699  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.122876  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:44.122955  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:44.161976  346554 cri.go:89] found id: ""
	I1002 07:21:44.162003  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.162015  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:44.162021  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:44.162120  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:44.190658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:44.190682  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:44.190688  346554 cri.go:89] found id: ""
	I1002 07:21:44.190695  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:44.190800  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.194562  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.198424  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:44.198514  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:44.224096  346554 cri.go:89] found id: ""
	I1002 07:21:44.224158  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.224181  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:44.224207  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:44.224284  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:44.251545  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:44.251569  346554 cri.go:89] found id: ""
	I1002 07:21:44.251581  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:44.251639  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.255354  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:44.255428  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:44.282373  346554 cri.go:89] found id: ""
	I1002 07:21:44.282400  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.282409  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:44.282419  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:44.282431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:44.308028  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:44.308062  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:44.363685  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:44.363723  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:44.396318  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:44.396349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:44.442337  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:44.442370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:44.546740  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:44.546778  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:44.562701  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:44.562734  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:44.638865  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:44.629817    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.630563    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632343    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632894    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.634422    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:44.629817    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.630563    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632343    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632894    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.634422    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:44.638901  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:44.638934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:44.675050  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:44.675117  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:44.759066  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:44.759108  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:44.789536  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:44.789569  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:47.372747  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:47.384470  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:47.384538  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:47.411456  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:47.411476  346554 cri.go:89] found id: ""
	I1002 07:21:47.411484  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:47.411538  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.415979  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:47.416052  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:47.441980  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:47.442000  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:47.442005  346554 cri.go:89] found id: ""
	I1002 07:21:47.442012  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:47.442071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.446178  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.449820  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:47.449889  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:47.480516  346554 cri.go:89] found id: ""
	I1002 07:21:47.480597  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.480614  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:47.480622  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:47.480700  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:47.512233  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:47.512299  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:47.512321  346554 cri.go:89] found id: ""
	I1002 07:21:47.512347  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:47.512447  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.517986  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.522484  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:47.522599  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:47.554391  346554 cri.go:89] found id: ""
	I1002 07:21:47.554459  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.554483  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:47.554509  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:47.554608  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:47.581519  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:47.581586  346554 cri.go:89] found id: ""
	I1002 07:21:47.581608  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:47.581710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.585885  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:47.585999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:47.615242  346554 cri.go:89] found id: ""
	I1002 07:21:47.615272  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.615281  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:47.615291  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:47.615322  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:47.635364  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:47.635394  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:47.712651  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:47.703908    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.704731    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.705628    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.706326    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.707409    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:47.703908    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.704731    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.705628    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.706326    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.707409    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:47.712678  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:47.712694  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:47.743506  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:47.743536  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:47.811148  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:47.811227  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:47.870291  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:47.870324  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:47.910224  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:47.910257  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:47.939069  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:47.939155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:47.964969  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:47.965008  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:48.043117  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:48.043158  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:48.088315  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:48.088344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:50.689757  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:50.700824  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:50.700893  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:50.728143  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:50.728166  346554 cri.go:89] found id: ""
	I1002 07:21:50.728175  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:50.728244  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.732333  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:50.732406  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:50.757855  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:50.757880  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:50.757886  346554 cri.go:89] found id: ""
	I1002 07:21:50.757905  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:50.757972  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.762029  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.765976  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:50.766050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:50.799256  346554 cri.go:89] found id: ""
	I1002 07:21:50.799278  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.799287  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:50.799293  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:50.799360  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:50.831950  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:50.831974  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:50.831981  346554 cri.go:89] found id: ""
	I1002 07:21:50.831988  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:50.832045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.836319  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.840585  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:50.840668  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:50.870390  346554 cri.go:89] found id: ""
	I1002 07:21:50.870416  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.870428  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:50.870436  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:50.870502  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:50.900076  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:50.900103  346554 cri.go:89] found id: ""
	I1002 07:21:50.900112  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:50.900193  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.904363  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:50.904461  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:50.932728  346554 cri.go:89] found id: ""
	I1002 07:21:50.932755  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.932775  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:50.932786  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:50.932798  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:51.001280  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:50.992878    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.993924    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.994793    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.995597    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.997141    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:50.992878    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.993924    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.994793    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.995597    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.997141    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:51.001310  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:51.001326  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:51.032692  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:51.032721  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:51.086523  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:51.086563  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:51.151924  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:51.151959  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:51.181936  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:51.181965  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:51.209313  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:51.209340  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:51.246072  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:51.246103  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:51.328956  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:51.328991  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:51.362658  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:51.362692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:51.461576  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:51.461615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:53.981504  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:53.992767  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:53.992841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:54.027324  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:54.027347  346554 cri.go:89] found id: ""
	I1002 07:21:54.027356  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:54.027422  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.031946  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:54.032021  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:54.059889  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:54.059911  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:54.059916  346554 cri.go:89] found id: ""
	I1002 07:21:54.059924  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:54.059983  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.064071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.068437  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:54.068516  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:54.100879  346554 cri.go:89] found id: ""
	I1002 07:21:54.100906  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.100917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:54.100923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:54.101019  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:54.127769  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:54.127792  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:54.127798  346554 cri.go:89] found id: ""
	I1002 07:21:54.127806  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:54.127871  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.131837  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.135428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:54.135507  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:54.163909  346554 cri.go:89] found id: ""
	I1002 07:21:54.163934  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.163943  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:54.163950  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:54.164008  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:54.195746  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:54.195778  346554 cri.go:89] found id: ""
	I1002 07:21:54.195787  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:54.195846  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.200638  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:54.200733  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:54.228414  346554 cri.go:89] found id: ""
	I1002 07:21:54.228492  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.228518  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:54.228534  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:54.228548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:54.261854  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:54.261884  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:54.337793  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:54.329984    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.330545    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332031    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332516    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.334074    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:54.329984    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.330545    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332031    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332516    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.334074    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:54.337814  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:54.337828  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:54.374142  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:54.374176  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:54.444394  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:54.444430  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:54.487047  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:54.487074  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:54.531639  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:54.531667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:54.639157  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:54.639196  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:54.655755  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:54.655784  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:54.685950  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:54.685978  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:54.753837  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:54.753879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:57.341138  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:57.351729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:57.351806  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:57.383937  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:57.383962  346554 cri.go:89] found id: ""
	I1002 07:21:57.383970  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:57.384030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.387697  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:57.387774  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:57.413348  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:57.413372  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:57.413377  346554 cri.go:89] found id: ""
	I1002 07:21:57.413385  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:57.413451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.417397  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.420826  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:57.420904  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:57.453888  346554 cri.go:89] found id: ""
	I1002 07:21:57.453913  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.453922  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:57.453928  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:57.453986  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:57.483451  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:57.483472  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:57.483476  346554 cri.go:89] found id: ""
	I1002 07:21:57.483483  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:57.483541  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.487407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.490932  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:57.491034  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:57.526291  346554 cri.go:89] found id: ""
	I1002 07:21:57.526318  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.526327  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:57.526334  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:57.526391  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:57.554217  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:57.554297  346554 cri.go:89] found id: ""
	I1002 07:21:57.554320  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:57.554415  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.558417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:57.558494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:57.590610  346554 cri.go:89] found id: ""
	I1002 07:21:57.590632  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.590640  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:57.590649  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:57.590662  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:57.686336  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:57.686376  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:57.717511  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:57.717543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:57.754283  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:57.754326  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:57.785227  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:57.785258  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:57.869305  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:57.869342  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:57.909139  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:57.909171  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:57.926456  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:57.926487  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:57.995639  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:57.987505    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.988090    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.989876    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.990282    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.991551    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:57.987505    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.988090    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.989876    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.990282    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.991551    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:57.995664  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:57.995679  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:58.058207  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:58.058248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:58.125241  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:58.125284  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:00.654876  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:00.665832  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:00.665905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:00.693874  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:00.693939  346554 cri.go:89] found id: ""
	I1002 07:22:00.693962  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:00.694054  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.697859  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:00.697934  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:00.725245  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:00.725270  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:00.725276  346554 cri.go:89] found id: ""
	I1002 07:22:00.725284  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:00.725364  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.729223  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.732817  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:00.732935  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:00.758839  346554 cri.go:89] found id: ""
	I1002 07:22:00.758906  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.758929  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:00.758953  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:00.759039  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:00.799071  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:00.799149  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:00.799155  346554 cri.go:89] found id: ""
	I1002 07:22:00.799162  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:00.799234  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.803167  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.806750  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:00.806845  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:00.839560  346554 cri.go:89] found id: ""
	I1002 07:22:00.839587  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.839596  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:00.839602  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:00.839660  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:00.870224  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:00.870255  346554 cri.go:89] found id: ""
	I1002 07:22:00.870263  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:00.870336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.874393  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:00.874495  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:00.912075  346554 cri.go:89] found id: ""
	I1002 07:22:00.912105  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.912114  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:00.912124  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:00.912136  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:00.937824  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:00.937853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:00.995416  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:00.995451  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:01.066170  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:01.066205  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:01.097565  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:01.097596  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:01.177599  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:01.177641  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:01.279014  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:01.279051  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:01.294984  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:01.295013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:01.367956  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:01.359956    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.360472    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362061    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362543    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.364048    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:01.359956    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.360472    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362061    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362543    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.364048    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:01.368020  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:01.368050  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:01.410820  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:01.410865  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:01.438796  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:01.438821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:03.971937  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:03.983881  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:03.983958  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:04.015026  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:04.015047  346554 cri.go:89] found id: ""
	I1002 07:22:04.015055  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:04.015146  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.019432  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:04.019511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:04.047606  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:04.047638  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:04.047644  346554 cri.go:89] found id: ""
	I1002 07:22:04.047651  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:04.047716  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.052312  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.055940  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:04.056013  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:04.084749  346554 cri.go:89] found id: ""
	I1002 07:22:04.084774  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.084784  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:04.084791  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:04.084858  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:04.115693  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:04.115718  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:04.115724  346554 cri.go:89] found id: ""
	I1002 07:22:04.115732  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:04.115791  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.119451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.123387  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:04.123509  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:04.160601  346554 cri.go:89] found id: ""
	I1002 07:22:04.160634  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.160643  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:04.160650  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:04.160709  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:04.186914  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:04.186975  346554 cri.go:89] found id: ""
	I1002 07:22:04.187000  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:04.187074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.190897  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:04.190972  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:04.217225  346554 cri.go:89] found id: ""
	I1002 07:22:04.217292  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.217306  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:04.217320  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:04.217332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:04.248848  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:04.248876  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:04.265771  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:04.265801  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:04.331344  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:04.323383    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.324116    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.325749    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.326044    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.327474    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:04.323383    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.324116    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.325749    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.326044    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.327474    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:04.331380  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:04.331395  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:04.358729  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:04.358757  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:04.416966  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:04.417007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:04.455261  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:04.455298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:04.483009  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:04.483037  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:04.563547  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:04.563585  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:04.668263  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:04.668301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:04.744129  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:04.744172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:07.275239  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:07.285854  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:07.285925  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:07.312977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:07.312997  346554 cri.go:89] found id: ""
	I1002 07:22:07.313005  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:07.313060  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.316845  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:07.316920  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:07.346852  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:07.346874  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:07.346879  346554 cri.go:89] found id: ""
	I1002 07:22:07.346887  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:07.346943  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.350635  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.354162  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:07.354227  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:07.383691  346554 cri.go:89] found id: ""
	I1002 07:22:07.383716  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.383725  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:07.383732  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:07.383790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:07.412740  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:07.412762  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:07.412768  346554 cri.go:89] found id: ""
	I1002 07:22:07.412775  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:07.412874  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.416633  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.420294  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:07.420370  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:07.448452  346554 cri.go:89] found id: ""
	I1002 07:22:07.448481  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.448496  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:07.448503  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:07.448573  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:07.478691  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:07.478759  346554 cri.go:89] found id: ""
	I1002 07:22:07.478782  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:07.478877  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.484491  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:07.484566  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:07.526882  346554 cri.go:89] found id: ""
	I1002 07:22:07.526907  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.526916  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:07.526926  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:07.526940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:07.543682  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:07.543709  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:07.622365  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:07.613920    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.614676    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616380    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616942    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.618513    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:07.613920    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.614676    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616380    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616942    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.618513    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:07.622386  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:07.622401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:07.688381  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:07.688417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:07.716317  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:07.716368  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:07.765160  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:07.765187  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:07.863442  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:07.863480  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:07.890947  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:07.890975  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:07.931413  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:07.931445  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:07.994034  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:07.994116  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:08.029432  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:08.029459  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:10.612654  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:10.624226  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:10.624295  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:10.651797  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:10.651820  346554 cri.go:89] found id: ""
	I1002 07:22:10.651829  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:10.651887  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.655778  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:10.655861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:10.682781  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:10.682804  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:10.682810  346554 cri.go:89] found id: ""
	I1002 07:22:10.682817  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:10.682873  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.686610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.690176  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:10.690248  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:10.716340  346554 cri.go:89] found id: ""
	I1002 07:22:10.716365  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.716374  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:10.716380  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:10.716450  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:10.744916  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:10.744941  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:10.744947  346554 cri.go:89] found id: ""
	I1002 07:22:10.744954  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:10.745009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.748825  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.752367  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:10.752459  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:10.778426  346554 cri.go:89] found id: ""
	I1002 07:22:10.778491  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.778519  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:10.778545  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:10.778634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:10.816930  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:10.816956  346554 cri.go:89] found id: ""
	I1002 07:22:10.816965  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:10.817021  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.820675  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:10.820748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:10.848624  346554 cri.go:89] found id: ""
	I1002 07:22:10.848692  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.848716  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:10.848747  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:10.848784  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:10.949146  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:10.949183  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:10.966424  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:10.966503  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:11.050571  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:11.041861    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.042811    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044425    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044785    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.047001    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:11.041861    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.042811    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044425    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044785    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.047001    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:11.050590  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:11.050607  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:11.096274  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:11.096305  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:11.163795  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:11.163833  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:11.198136  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:11.198167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:11.281776  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:11.281815  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:11.314298  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:11.314329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:11.346046  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:11.346074  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:11.401509  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:11.401546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:13.937437  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:13.948853  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:13.948931  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:13.978524  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:13.978546  346554 cri.go:89] found id: ""
	I1002 07:22:13.978562  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:13.978622  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:13.983904  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:13.984002  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:14.018404  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:14.018427  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:14.018432  346554 cri.go:89] found id: ""
	I1002 07:22:14.018441  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:14.018501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.022898  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.027485  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:14.027580  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:14.067189  346554 cri.go:89] found id: ""
	I1002 07:22:14.067277  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.067293  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:14.067301  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:14.067380  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:14.098843  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:14.098868  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:14.098874  346554 cri.go:89] found id: ""
	I1002 07:22:14.098882  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:14.098938  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.103497  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.107744  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:14.107820  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:14.136768  346554 cri.go:89] found id: ""
	I1002 07:22:14.136797  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.136807  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:14.136813  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:14.136880  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:14.163984  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:14.164055  346554 cri.go:89] found id: ""
	I1002 07:22:14.164079  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:14.164165  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.168259  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:14.168337  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:14.201762  346554 cri.go:89] found id: ""
	I1002 07:22:14.201789  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.201799  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:14.201809  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:14.201822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:14.228036  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:14.228067  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:14.305247  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:14.305286  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:14.417180  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:14.417216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:14.434371  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:14.434404  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:14.494496  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:14.494534  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:14.530240  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:14.530274  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:14.565285  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:14.565312  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:14.656059  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:14.648012    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.648398    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.649913    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.650225    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.651841    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:14.648012    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.648398    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.649913    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.650225    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.651841    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:14.656082  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:14.656096  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:14.684431  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:14.684465  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:14.720953  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:14.720987  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:17.291251  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:17.303244  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:17.303315  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:17.330183  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:17.330208  346554 cri.go:89] found id: ""
	I1002 07:22:17.330217  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:17.330281  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.334207  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:17.334281  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:17.363238  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:17.363263  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:17.363269  346554 cri.go:89] found id: ""
	I1002 07:22:17.363276  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:17.363331  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.367005  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.370719  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:17.370792  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:17.397991  346554 cri.go:89] found id: ""
	I1002 07:22:17.398016  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.398026  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:17.398032  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:17.398092  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:17.431537  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:17.431562  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:17.431568  346554 cri.go:89] found id: ""
	I1002 07:22:17.431575  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:17.431631  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.435774  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.439628  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:17.439701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:17.470573  346554 cri.go:89] found id: ""
	I1002 07:22:17.470598  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.470614  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:17.470621  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:17.470689  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:17.496787  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:17.496813  346554 cri.go:89] found id: ""
	I1002 07:22:17.496822  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:17.496879  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.500676  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:17.500809  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:17.528111  346554 cri.go:89] found id: ""
	I1002 07:22:17.528136  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.528145  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:17.528155  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:17.528167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:17.629228  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:17.629269  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:17.719781  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:17.711134    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.712057    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713690    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713991    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.715616    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:17.711134    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.712057    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713690    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713991    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.715616    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:17.719804  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:17.719818  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:17.791077  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:17.791176  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:17.835873  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:17.835907  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:17.865669  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:17.865698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:17.947809  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:17.947851  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:17.966021  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:17.966054  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:17.993388  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:17.993419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:18.067826  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:18.067915  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:18.098854  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:18.098928  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:20.640412  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:20.654177  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:20.654280  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:20.689110  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:20.689138  346554 cri.go:89] found id: ""
	I1002 07:22:20.689146  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:20.689210  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.692968  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:20.693043  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:20.726246  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:20.726271  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:20.726276  346554 cri.go:89] found id: ""
	I1002 07:22:20.726284  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:20.726340  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.730329  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.734406  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:20.734503  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:20.762306  346554 cri.go:89] found id: ""
	I1002 07:22:20.762332  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.762341  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:20.762348  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:20.762406  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:20.801345  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:20.801370  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:20.801375  346554 cri.go:89] found id: ""
	I1002 07:22:20.801383  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:20.801461  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.805572  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.809363  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:20.809439  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:20.839370  346554 cri.go:89] found id: ""
	I1002 07:22:20.839396  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.839405  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:20.839411  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:20.839487  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:20.866883  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:20.866908  346554 cri.go:89] found id: ""
	I1002 07:22:20.866918  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:20.866994  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.871482  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:20.871602  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:20.915272  346554 cri.go:89] found id: ""
	I1002 07:22:20.915297  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.915306  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:20.915334  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:20.915354  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:20.969984  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:20.970023  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:21.008389  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:21.008426  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:21.097527  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:21.097564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:21.131052  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:21.131112  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:21.250056  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:21.250095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:21.266497  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:21.266528  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:21.336488  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:21.328099    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.328680    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330526    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330860    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.332595    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:21.328099    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.328680    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330526    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330860    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.332595    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:21.336517  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:21.336534  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:21.365447  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:21.365477  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:21.432439  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:21.432517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:21.464158  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:21.464186  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:23.993684  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:24.012128  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:24.012344  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:24.041820  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:24.041844  346554 cri.go:89] found id: ""
	I1002 07:22:24.041853  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:24.041913  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.045939  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:24.046012  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:24.080951  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:24.080971  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:24.080977  346554 cri.go:89] found id: ""
	I1002 07:22:24.080984  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:24.081042  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.086379  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.090878  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:24.090956  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:24.118754  346554 cri.go:89] found id: ""
	I1002 07:22:24.118793  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.118803  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:24.118809  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:24.118876  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:24.162937  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:24.162960  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:24.162967  346554 cri.go:89] found id: ""
	I1002 07:22:24.162975  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:24.163041  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.167416  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.171521  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:24.171612  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:24.198740  346554 cri.go:89] found id: ""
	I1002 07:22:24.198764  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.198774  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:24.198780  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:24.198849  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:24.226586  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:24.226607  346554 cri.go:89] found id: ""
	I1002 07:22:24.226616  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:24.226676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.230625  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:24.230701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:24.258053  346554 cri.go:89] found id: ""
	I1002 07:22:24.258089  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.258100  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:24.258110  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:24.258122  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:24.357393  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:24.357431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:24.375359  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:24.375390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:24.444675  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:24.444714  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:24.484227  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:24.484262  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:24.512674  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:24.512707  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:24.597691  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:24.589362    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.589905    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.591682    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.592352    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.593874    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:24.589362    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.589905    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.591682    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.592352    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.593874    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:24.597712  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:24.597728  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:24.628466  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:24.628492  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:24.706367  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:24.706408  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:24.737446  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:24.737475  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:24.822997  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:24.823036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
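	(While it waits for the apiserver to return, minikube repeats the same diagnostic pass roughly every three seconds: probe for a kube-apiserver process, enumerate control-plane containers with crictl, then tail the logs of whatever it finds. The commands below are copied from the log above and can be replayed by hand inside the node; the `minikube ssh` wrapper and profile name are assumptions, since the log issues them over ssh_runner.)

	# Replay minikube's health-check pass by hand (e.g. via `minikube ssh -p <profile>`)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'            # is an apiserver process up?
	sudo crictl ps -a --quiet --name=kube-apiserver         # enumerate control-plane containers
	sudo crictl ps -a --quiet --name=etcd
	sudo crictl ps -a --quiet --name=kube-scheduler
	sudo crictl ps -a --quiet --name=kube-controller-manager
	sudo /usr/local/bin/crictl logs --tail 400 <container-id>   # tail a container id found above
	sudo journalctl -u kubelet -n 400                        # kubelet and CRI-O unit logs
	sudo journalctl -u crio -n 400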
	I1002 07:22:27.355482  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:27.366566  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:27.366636  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:27.394804  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:27.394828  346554 cri.go:89] found id: ""
	I1002 07:22:27.394837  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:27.394901  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.398931  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:27.399000  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:27.425553  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:27.425576  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:27.425582  346554 cri.go:89] found id: ""
	I1002 07:22:27.425590  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:27.425651  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.429400  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.433140  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:27.433237  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:27.463605  346554 cri.go:89] found id: ""
	I1002 07:22:27.463626  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.463635  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:27.463642  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:27.463701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:27.493043  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:27.493074  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:27.493080  346554 cri.go:89] found id: ""
	I1002 07:22:27.493087  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:27.493145  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.497072  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.500729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:27.500805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:27.531993  346554 cri.go:89] found id: ""
	I1002 07:22:27.532021  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.532031  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:27.532037  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:27.532097  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:27.559232  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:27.559310  346554 cri.go:89] found id: ""
	I1002 07:22:27.559329  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:27.559400  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.563624  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:27.563744  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:27.593254  346554 cri.go:89] found id: ""
	I1002 07:22:27.593281  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.593302  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:27.593313  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:27.593328  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:27.622961  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:27.622992  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:27.700292  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:27.690392    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.691740    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.692828    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694000    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694658    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:27.690392    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.691740    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.692828    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694000    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694658    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:27.700315  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:27.700329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:27.760790  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:27.760830  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:27.800937  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:27.800976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:27.879230  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:27.879273  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:27.910457  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:27.910561  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:27.998247  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:27.998287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:28.039823  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:28.039856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:28.148384  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:28.148472  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:28.170086  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:28.170114  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:30.702644  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:30.713672  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:30.713748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:30.742461  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:30.742484  346554 cri.go:89] found id: ""
	I1002 07:22:30.742493  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:30.742553  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.746359  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:30.746446  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:30.777229  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:30.777256  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:30.777261  346554 cri.go:89] found id: ""
	I1002 07:22:30.777269  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:30.777345  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.781661  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.785300  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:30.785373  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:30.812435  346554 cri.go:89] found id: ""
	I1002 07:22:30.812465  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.812474  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:30.812481  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:30.812558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:30.839730  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:30.839752  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:30.839758  346554 cri.go:89] found id: ""
	I1002 07:22:30.839765  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:30.839851  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.843582  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.847332  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:30.847414  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:30.877768  346554 cri.go:89] found id: ""
	I1002 07:22:30.877795  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.877804  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:30.877811  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:30.877919  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:30.906930  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:30.906954  346554 cri.go:89] found id: ""
	I1002 07:22:30.906970  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:30.907050  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.911004  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:30.911153  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:30.936781  346554 cri.go:89] found id: ""
	I1002 07:22:30.936817  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.936826  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:30.936836  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:30.936849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:30.963944  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:30.963978  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:31.039393  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:31.039431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:31.056356  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:31.056396  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:31.086443  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:31.086483  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:31.129305  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:31.129342  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:31.206518  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:31.206557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:31.246963  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:31.246992  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:31.349345  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:31.349380  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:31.424210  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:31.415481    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.416258    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.417862    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.418419    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.420138    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:31.415481    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.416258    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.417862    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.418419    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.420138    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:31.424235  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:31.424247  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:31.494342  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:31.494381  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.028701  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:34.039883  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:34.039955  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:34.082124  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:34.082149  346554 cri.go:89] found id: ""
	I1002 07:22:34.082158  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:34.082222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.086333  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:34.086408  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:34.115537  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:34.115562  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:34.115568  346554 cri.go:89] found id: ""
	I1002 07:22:34.115575  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:34.115632  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.119540  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.123109  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:34.123181  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:34.149943  346554 cri.go:89] found id: ""
	I1002 07:22:34.149969  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.149978  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:34.149985  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:34.150098  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:34.177023  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:34.177044  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.177051  346554 cri.go:89] found id: ""
	I1002 07:22:34.177060  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:34.177117  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.180893  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.184341  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:34.184418  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:34.211353  346554 cri.go:89] found id: ""
	I1002 07:22:34.211377  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.211385  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:34.211391  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:34.211449  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:34.237574  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:34.237593  346554 cri.go:89] found id: ""
	I1002 07:22:34.237601  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:34.237659  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.241551  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:34.241626  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:34.272007  346554 cri.go:89] found id: ""
	I1002 07:22:34.272030  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.272039  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:34.272048  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:34.272059  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:34.344503  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:34.344540  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:34.378151  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:34.378181  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:34.479542  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:34.479579  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:34.561912  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:34.553376    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.554044    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.555646    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.556517    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.558373    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:34.553376    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.554044    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.555646    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.556517    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.558373    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:34.561988  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:34.562009  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:34.627010  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:34.627046  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:34.675398  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:34.675431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:34.761258  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:34.761301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:34.783800  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:34.783847  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:34.822817  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:34.822856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.855272  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:34.855298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:37.390316  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:37.401208  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:37.401285  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:37.428835  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:37.428857  346554 cri.go:89] found id: ""
	I1002 07:22:37.428864  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:37.428934  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.433201  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:37.433276  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:37.461633  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:37.461664  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:37.461670  346554 cri.go:89] found id: ""
	I1002 07:22:37.461678  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:37.461736  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.465629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.469272  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:37.469348  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:37.498524  346554 cri.go:89] found id: ""
	I1002 07:22:37.498551  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.498561  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:37.498567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:37.498627  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:37.535431  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:37.535453  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:37.535458  346554 cri.go:89] found id: ""
	I1002 07:22:37.535465  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:37.535523  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.539518  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.543351  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:37.543429  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:37.569817  346554 cri.go:89] found id: ""
	I1002 07:22:37.569886  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.569912  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:37.569938  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:37.570048  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:37.600094  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:37.600161  346554 cri.go:89] found id: ""
	I1002 07:22:37.600184  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:37.600279  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.604474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:37.604627  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:37.635043  346554 cri.go:89] found id: ""
	I1002 07:22:37.635139  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.635164  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:37.635209  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:37.635241  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:37.652712  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:37.652747  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:37.724304  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:37.715214    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.715952    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.717909    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.718653    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.720486    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:37.715214    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.715952    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.717909    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.718653    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.720486    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
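	(Every "describe nodes" attempt in this stretch fails the same way: kubectl cannot reach the apiserver on localhost:8443, so the restarted control plane is still not serving. The failing command, exactly as the log runs it, is shown below; it exits 1 for as long as the connection is refused.)

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig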
	I1002 07:22:37.724327  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:37.724343  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:37.778979  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:37.779018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:37.823368  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:37.823400  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:37.852458  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:37.852487  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:37.935415  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:37.935451  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:38.032660  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:38.032698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:38.062211  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:38.062292  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:38.141041  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:38.141076  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:38.167504  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:38.167535  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:40.716529  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:40.727155  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:40.727237  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:40.759650  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:40.759670  346554 cri.go:89] found id: ""
	I1002 07:22:40.759677  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:40.759739  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.763794  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:40.763891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:40.799428  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:40.799495  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:40.799505  346554 cri.go:89] found id: ""
	I1002 07:22:40.799513  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:40.799587  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.804441  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.808181  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:40.808256  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:40.839434  346554 cri.go:89] found id: ""
	I1002 07:22:40.839458  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.839466  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:40.839479  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:40.839540  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:40.866347  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:40.866368  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:40.866373  346554 cri.go:89] found id: ""
	I1002 07:22:40.866380  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:40.866435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.870243  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.873802  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:40.873887  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:40.915472  346554 cri.go:89] found id: ""
	I1002 07:22:40.915499  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.915508  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:40.915515  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:40.915589  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:40.945530  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:40.945552  346554 cri.go:89] found id: ""
	I1002 07:22:40.945570  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:40.945629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.949410  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:40.949513  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:40.976546  346554 cri.go:89] found id: ""
	I1002 07:22:40.976589  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.976598  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:40.976608  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:40.976620  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:40.993923  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:40.993952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:41.069718  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:41.061732    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.062193    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.063798    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.064141    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.065342    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:41.061732    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.062193    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.063798    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.064141    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.065342    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:41.069746  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:41.069760  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:41.101275  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:41.101313  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:41.185486  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:41.185522  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:41.213391  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:41.213419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:41.286933  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:41.286973  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:41.325032  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:41.325063  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:41.427475  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:41.427517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:41.507722  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:41.507762  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:41.553697  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:41.553731  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
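	(By this iteration the pattern is clear from the log itself: kube-apiserver, etcd (two instances), kube-scheduler (two instances) and kube-controller-manager containers exist, but nothing answers on localhost:8443 and coredns, kube-proxy and kindnet never appear. Two quick manual checks, not part of the log but handy when reproducing this state, assuming `ss` and `curl` are available in the node image:)

	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"   # is the apiserver port bound?
	curl -sk https://localhost:8443/livez || echo "apiserver not responding"   # -k: self-signed cert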
	I1002 07:22:44.083713  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:44.094946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:44.095050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:44.122939  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:44.122961  346554 cri.go:89] found id: ""
	I1002 07:22:44.122970  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:44.123027  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.126926  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:44.127001  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:44.168228  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:44.168253  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:44.168259  346554 cri.go:89] found id: ""
	I1002 07:22:44.168267  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:44.168325  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.172203  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.176051  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:44.176154  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:44.207518  346554 cri.go:89] found id: ""
	I1002 07:22:44.207545  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.207554  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:44.207560  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:44.207619  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:44.236177  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:44.236200  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:44.236206  346554 cri.go:89] found id: ""
	I1002 07:22:44.236214  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:44.236274  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.239868  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.243456  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:44.243575  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:44.269491  346554 cri.go:89] found id: ""
	I1002 07:22:44.269568  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.269596  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:44.269612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:44.269687  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:44.295403  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:44.295423  346554 cri.go:89] found id: ""
	I1002 07:22:44.295431  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:44.295490  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.299440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:44.299555  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:44.333034  346554 cri.go:89] found id: ""
	I1002 07:22:44.333110  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.333136  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:44.333175  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:44.333210  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:44.364108  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:44.364139  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:44.433101  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:44.424314    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.424960    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.426515    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.427164    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.428946    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:44.424314    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.424960    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.426515    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.427164    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.428946    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:44.433123  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:44.433137  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:44.489676  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:44.489711  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:44.535780  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:44.535819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:44.563832  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:44.563862  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:44.644267  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:44.644308  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:44.678038  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:44.678077  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:44.779429  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:44.779467  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:44.802305  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:44.802335  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:44.828371  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:44.828400  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.412789  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:47.423373  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:47.423464  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:47.451136  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:47.451162  346554 cri.go:89] found id: ""
	I1002 07:22:47.451171  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:47.451237  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.455412  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:47.455531  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:47.487387  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:47.487418  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:47.487424  346554 cri.go:89] found id: ""
	I1002 07:22:47.487432  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:47.487491  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.491360  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.495265  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:47.495336  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:47.534120  346554 cri.go:89] found id: ""
	I1002 07:22:47.534144  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.534153  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:47.534159  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:47.534223  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:47.567581  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.567604  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:47.567610  346554 cri.go:89] found id: ""
	I1002 07:22:47.567618  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:47.567676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.571558  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.575428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:47.575500  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:47.604017  346554 cri.go:89] found id: ""
	I1002 07:22:47.604041  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.604050  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:47.604057  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:47.604178  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:47.631246  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:47.631266  346554 cri.go:89] found id: ""
	I1002 07:22:47.631275  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:47.631336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.635224  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:47.635329  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:47.662879  346554 cri.go:89] found id: ""
	I1002 07:22:47.662906  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.662916  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:47.662925  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:47.662969  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:47.758850  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:47.758889  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:47.787003  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:47.787035  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.865561  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:47.865598  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:47.894009  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:47.894083  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:47.911472  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:47.911547  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:47.992995  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:47.978023    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.979713    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986171    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986781    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.988190    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:47.978023    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.979713    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986171    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986781    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.988190    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:47.993061  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:47.993095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:48.054795  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:48.054833  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:48.105647  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:48.105681  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:48.136822  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:48.136852  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:48.221826  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:48.221868  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:50.759146  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:50.770232  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:50.770304  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:50.808978  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:50.808999  346554 cri.go:89] found id: ""
	I1002 07:22:50.809014  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:50.809071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.812891  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:50.812973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:50.844548  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:50.844621  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:50.844634  346554 cri.go:89] found id: ""
	I1002 07:22:50.844643  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:50.844704  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.848854  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.853318  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:50.853395  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:50.879864  346554 cri.go:89] found id: ""
	I1002 07:22:50.879885  346554 logs.go:282] 0 containers: []
	W1002 07:22:50.879894  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:50.879901  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:50.879978  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:50.913482  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:50.913502  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:50.913506  346554 cri.go:89] found id: ""
	I1002 07:22:50.913514  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:50.913571  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.917411  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.920913  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:50.920995  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:50.953742  346554 cri.go:89] found id: ""
	I1002 07:22:50.953769  346554 logs.go:282] 0 containers: []
	W1002 07:22:50.953778  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:50.953785  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:50.953849  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:50.982216  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:50.982239  346554 cri.go:89] found id: ""
	I1002 07:22:50.982247  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:50.982312  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.985960  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:50.986036  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:51.023369  346554 cri.go:89] found id: ""
	I1002 07:22:51.023407  346554 logs.go:282] 0 containers: []
	W1002 07:22:51.023416  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:51.023425  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:51.023437  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:51.124423  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:51.124471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:51.162362  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:51.162466  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:51.193077  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:51.193120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:51.209317  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:51.209348  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:51.286706  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:51.277838    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.278649    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280280    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280639    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.282163    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:51.277838    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.278649    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280280    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280639    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.282163    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:51.286736  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:51.286768  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:51.314928  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:51.315005  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:51.375178  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:51.375216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:51.450324  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:51.450368  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:51.478495  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:51.478526  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:51.563131  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:51.563178  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:54.112345  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:54.123567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:54.123643  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:54.154215  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:54.154239  346554 cri.go:89] found id: ""
	I1002 07:22:54.154247  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:54.154306  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.158242  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:54.158319  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:54.192307  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:54.192332  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:54.192343  346554 cri.go:89] found id: ""
	I1002 07:22:54.192351  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:54.192419  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.197194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.201582  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:54.201705  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:54.228380  346554 cri.go:89] found id: ""
	I1002 07:22:54.228415  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.228425  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:54.228432  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:54.228525  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:54.256056  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:54.256080  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:54.256087  346554 cri.go:89] found id: ""
	I1002 07:22:54.256094  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:54.256155  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.260143  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.263934  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:54.264008  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:54.290214  346554 cri.go:89] found id: ""
	I1002 07:22:54.290241  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.290251  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:54.290256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:54.290314  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:54.319063  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:54.319117  346554 cri.go:89] found id: ""
	I1002 07:22:54.319126  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:54.319184  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.323448  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:54.323547  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:54.354341  346554 cri.go:89] found id: ""
	I1002 07:22:54.354366  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.354374  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:54.354384  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:54.354396  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:54.409595  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:54.409633  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:54.449908  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:54.449944  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:54.532130  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:54.532170  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:54.559794  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:54.559822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:54.593620  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:54.593651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:54.700915  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:54.700951  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:54.727426  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:54.727452  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:54.756226  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:54.756263  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:54.841269  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:54.841312  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:54.859387  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:54.859425  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:54.940701  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:54.932413    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.933246    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.934849    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.935238    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.936807    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:54.932413    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.933246    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.934849    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.935238    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.936807    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:57.441672  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:57.453569  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:57.453639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:57.483699  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:57.483722  346554 cri.go:89] found id: ""
	I1002 07:22:57.483746  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:57.483845  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.487681  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:57.487775  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:57.518495  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:57.518520  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:57.518526  346554 cri.go:89] found id: ""
	I1002 07:22:57.518534  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:57.518593  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.522615  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.526448  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:57.526523  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:57.553219  346554 cri.go:89] found id: ""
	I1002 07:22:57.553246  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.553255  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:57.553263  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:57.553327  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:57.582109  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:57.582132  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:57.582137  346554 cri.go:89] found id: ""
	I1002 07:22:57.582146  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:57.582209  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.586222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.590675  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:57.590752  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:57.621475  346554 cri.go:89] found id: ""
	I1002 07:22:57.621544  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.621567  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:57.621592  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:57.621680  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:57.647238  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:57.647304  346554 cri.go:89] found id: ""
	I1002 07:22:57.647329  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:57.647425  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.651299  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:57.651391  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:57.681221  346554 cri.go:89] found id: ""
	I1002 07:22:57.681298  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.681324  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:57.681350  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:57.681387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:57.757042  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:57.757079  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:57.789483  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:57.789519  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:57.876258  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:57.876301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:57.909957  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:57.909986  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:57.994768  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:57.985195    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.985977    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.987651    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.988458    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.990380    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:57.985195    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.985977    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.987651    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.988458    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.990380    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:57.994790  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:57.994804  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:58.057805  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:58.057845  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:58.093196  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:58.093227  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:58.192017  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:58.192055  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:58.209558  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:58.209587  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:58.236404  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:58.236433  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:00.781745  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:00.796477  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:00.796552  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:00.823241  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:00.823265  346554 cri.go:89] found id: ""
	I1002 07:23:00.823273  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:00.823327  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.827586  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:00.827675  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:00.862251  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:00.862274  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:00.862280  346554 cri.go:89] found id: ""
	I1002 07:23:00.862287  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:00.862348  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.866453  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.870120  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:00.870189  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:00.910250  346554 cri.go:89] found id: ""
	I1002 07:23:00.910318  346554 logs.go:282] 0 containers: []
	W1002 07:23:00.910341  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:00.910366  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:00.910451  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:00.939142  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:00.939208  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:00.939234  346554 cri.go:89] found id: ""
	I1002 07:23:00.939243  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:00.939300  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.943281  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.947110  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:00.947180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:00.979402  346554 cri.go:89] found id: ""
	I1002 07:23:00.979431  346554 logs.go:282] 0 containers: []
	W1002 07:23:00.979444  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:00.979452  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:00.979518  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:01.016038  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:01.016103  346554 cri.go:89] found id: ""
	I1002 07:23:01.016131  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:01.016225  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:01.020366  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:01.020520  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:01.049712  346554 cri.go:89] found id: ""
	I1002 07:23:01.049780  346554 logs.go:282] 0 containers: []
	W1002 07:23:01.049803  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:01.049831  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:01.049870  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:01.101253  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:01.101287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:01.200014  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:01.200053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:01.277860  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:01.264774    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.266699    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.271332    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.272085    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.273912    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:01.264774    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.266699    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.271332    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.272085    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.273912    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:01.277885  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:01.277898  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:01.341507  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:01.341545  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:01.413278  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:01.413313  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:01.446875  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:01.446914  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:01.475436  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:01.475464  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:01.551813  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:01.551853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:01.585150  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:01.585187  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:01.601574  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:01.601606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:04.131042  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:04.142520  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:04.142634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:04.176669  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:04.176692  346554 cri.go:89] found id: ""
	I1002 07:23:04.176701  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:04.176763  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.180972  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:04.181051  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:04.208821  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:04.208846  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:04.208851  346554 cri.go:89] found id: ""
	I1002 07:23:04.208859  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:04.208925  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.213191  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.217006  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:04.217129  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:04.245751  346554 cri.go:89] found id: ""
	I1002 07:23:04.245775  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.245790  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:04.245798  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:04.245859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:04.284664  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:04.284685  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:04.284689  346554 cri.go:89] found id: ""
	I1002 07:23:04.284697  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:04.284756  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.288986  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.292617  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:04.292700  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:04.320145  346554 cri.go:89] found id: ""
	I1002 07:23:04.320171  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.320180  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:04.320187  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:04.320245  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:04.347600  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:04.347622  346554 cri.go:89] found id: ""
	I1002 07:23:04.347631  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:04.347686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.351440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:04.351511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:04.383653  346554 cri.go:89] found id: ""
	I1002 07:23:04.383732  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.383749  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:04.383759  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:04.383775  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:04.440177  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:04.440218  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:04.468956  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:04.469027  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:04.545741  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:04.545780  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:04.579865  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:04.579895  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:04.681656  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:04.681695  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:04.752352  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:04.744202   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.744834   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746456   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746996   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.748061   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:04.744202   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.744834   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746456   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746996   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.748061   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:04.752373  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:04.752387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:04.793420  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:04.793493  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:04.864258  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:04.864293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:04.893921  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:04.894006  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:04.911663  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:04.911693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.444239  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:07.455140  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:07.455218  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:07.484101  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.484124  346554 cri.go:89] found id: ""
	I1002 07:23:07.484133  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:07.484189  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.488067  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:07.488145  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:07.522958  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:07.523021  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:07.523044  346554 cri.go:89] found id: ""
	I1002 07:23:07.523071  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:07.523194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.527249  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.531022  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:07.531124  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:07.557498  346554 cri.go:89] found id: ""
	I1002 07:23:07.557519  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.557528  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:07.557535  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:07.557609  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:07.584061  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:07.584092  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:07.584096  346554 cri.go:89] found id: ""
	I1002 07:23:07.584105  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:07.584170  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.587957  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.591564  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:07.591639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:07.619944  346554 cri.go:89] found id: ""
	I1002 07:23:07.619971  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.619980  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:07.619987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:07.620050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:07.648834  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:07.648855  346554 cri.go:89] found id: ""
	I1002 07:23:07.648863  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:07.648919  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.652819  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:07.652937  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:07.682396  346554 cri.go:89] found id: ""
	I1002 07:23:07.682421  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.682430  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:07.682439  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:07.682452  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:07.751625  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:07.743061   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.744026   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.745740   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.746058   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.747713   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:07.743061   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.744026   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.745740   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.746058   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.747713   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:07.751650  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:07.751667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.778524  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:07.778551  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:07.850872  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:07.850910  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:07.887246  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:07.887283  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:07.959701  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:07.959738  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:07.989632  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:07.989661  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:08.009848  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:08.009885  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:08.041024  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:08.041052  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:08.120762  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:08.120798  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:08.174204  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:08.174234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:10.791227  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:10.804748  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:10.804834  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:10.833209  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:10.833256  346554 cri.go:89] found id: ""
	I1002 07:23:10.833264  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:10.833327  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.837233  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:10.837307  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:10.867407  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:10.867431  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:10.867436  346554 cri.go:89] found id: ""
	I1002 07:23:10.867444  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:10.867501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.871289  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.874962  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:10.875041  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:10.909346  346554 cri.go:89] found id: ""
	I1002 07:23:10.909372  346554 logs.go:282] 0 containers: []
	W1002 07:23:10.909381  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:10.909388  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:10.909444  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:10.944052  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:10.944127  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:10.944152  346554 cri.go:89] found id: ""
	I1002 07:23:10.944181  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:10.944285  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.952530  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.957003  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:10.957085  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:10.984253  346554 cri.go:89] found id: ""
	I1002 07:23:10.984287  346554 logs.go:282] 0 containers: []
	W1002 07:23:10.984297  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:10.984321  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:10.984401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:11.018350  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:11.018417  346554 cri.go:89] found id: ""
	I1002 07:23:11.018442  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:11.018520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:11.022612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:11.022707  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:11.054294  346554 cri.go:89] found id: ""
	I1002 07:23:11.054371  346554 logs.go:282] 0 containers: []
	W1002 07:23:11.054394  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:11.054437  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:11.054471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:11.132821  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:11.124867   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.125650   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.126895   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.127432   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.129002   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:11.124867   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.125650   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.126895   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.127432   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.129002   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:11.132846  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:11.132859  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:11.161373  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:11.161401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:11.219899  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:11.219936  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:11.250524  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:11.250554  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:11.282533  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:11.282564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:11.385870  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:11.385909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:11.402968  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:11.402997  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:11.447948  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:11.447983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:11.521218  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:11.521256  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:11.551246  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:11.551320  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:14.129146  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:14.140212  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:14.140315  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:14.167561  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:14.167585  346554 cri.go:89] found id: ""
	I1002 07:23:14.167593  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:14.167691  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.171728  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:14.171841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:14.198571  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:14.198594  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:14.198600  346554 cri.go:89] found id: ""
	I1002 07:23:14.198607  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:14.198693  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.202658  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.207962  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:14.208057  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:14.233944  346554 cri.go:89] found id: ""
	I1002 07:23:14.233970  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.233979  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:14.233986  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:14.234064  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:14.264854  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:14.264878  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:14.264884  346554 cri.go:89] found id: ""
	I1002 07:23:14.264892  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:14.264948  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.268797  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.272677  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:14.272756  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:14.304992  346554 cri.go:89] found id: ""
	I1002 07:23:14.305031  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.305041  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:14.305047  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:14.305120  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:14.335500  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:14.335570  346554 cri.go:89] found id: ""
	I1002 07:23:14.335593  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:14.335684  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.339428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:14.339502  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:14.366928  346554 cri.go:89] found id: ""
	I1002 07:23:14.366954  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.366964  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:14.366973  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:14.366984  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:14.441765  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:14.441808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:14.473510  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:14.473541  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:14.552162  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:14.552201  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:14.586130  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:14.586160  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:14.602135  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:14.602164  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:14.638523  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:14.638557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:14.717772  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:14.717808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:14.748211  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:14.748283  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:14.848964  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:14.849003  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:14.926254  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:14.916550   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.917229   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.918910   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.919742   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.921374   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:14.916550   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.917229   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.918910   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.919742   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.921374   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:14.926277  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:14.926290  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:17.456912  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:17.467889  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:17.467979  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:17.495434  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:17.495457  346554 cri.go:89] found id: ""
	I1002 07:23:17.495466  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:17.495524  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.499591  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:17.499663  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:17.535737  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:17.535757  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:17.535761  346554 cri.go:89] found id: ""
	I1002 07:23:17.535768  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:17.535826  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.540069  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.543817  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:17.543891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:17.573877  346554 cri.go:89] found id: ""
	I1002 07:23:17.573907  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.573917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:17.573923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:17.573989  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:17.609297  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:17.609320  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:17.609326  346554 cri.go:89] found id: ""
	I1002 07:23:17.609333  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:17.609390  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.613640  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.617183  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:17.617253  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:17.647944  346554 cri.go:89] found id: ""
	I1002 07:23:17.647971  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.647980  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:17.647987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:17.648045  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:17.674528  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:17.674552  346554 cri.go:89] found id: ""
	I1002 07:23:17.674561  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:17.674617  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.678979  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:17.679143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:17.706803  346554 cri.go:89] found id: ""
	I1002 07:23:17.706828  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.706837  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:17.706846  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:17.706857  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:17.801171  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:17.801207  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:17.817922  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:17.817952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:17.889064  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:17.889103  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:17.971481  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:17.971518  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:18.051668  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:18.051712  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:18.090695  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:18.090723  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:18.162304  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:18.153808   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.154523   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156207   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156763   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.158433   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:18.153808   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.154523   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156207   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156763   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.158433   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:18.162328  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:18.162343  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:18.194200  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:18.194233  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:18.231522  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:18.231557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:18.263215  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:18.263246  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:20.795234  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:20.807871  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:20.807939  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:20.839049  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:20.839070  346554 cri.go:89] found id: ""
	I1002 07:23:20.839098  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:20.839172  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.842946  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:20.843023  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:20.873446  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:20.873469  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:20.873475  346554 cri.go:89] found id: ""
	I1002 07:23:20.873484  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:20.873540  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.877435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.881337  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:20.881415  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:20.918940  346554 cri.go:89] found id: ""
	I1002 07:23:20.918971  346554 logs.go:282] 0 containers: []
	W1002 07:23:20.918980  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:20.918987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:20.919046  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:20.951052  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:20.951075  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:20.951112  346554 cri.go:89] found id: ""
	I1002 07:23:20.951120  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:20.951185  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.955805  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.959649  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:20.959737  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:20.987685  346554 cri.go:89] found id: ""
	I1002 07:23:20.987710  346554 logs.go:282] 0 containers: []
	W1002 07:23:20.987719  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:20.987726  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:20.987792  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:21.028577  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:21.028602  346554 cri.go:89] found id: ""
	I1002 07:23:21.028622  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:21.028683  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:21.032899  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:21.032977  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:21.062654  346554 cri.go:89] found id: ""
	I1002 07:23:21.062679  346554 logs.go:282] 0 containers: []
	W1002 07:23:21.062688  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:21.062698  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:21.062710  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:21.091027  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:21.091059  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:21.159267  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:21.159307  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:21.231814  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:21.231856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:21.263174  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:21.263205  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:21.310161  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:21.310194  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:21.349961  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:21.349997  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:21.379224  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:21.379306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:21.454682  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:21.454722  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:21.560920  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:21.560960  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:21.578179  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:21.578211  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:21.668218  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:21.658544   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.659665   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.660225   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662214   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662758   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:21.658544   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.659665   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.660225   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662214   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662758   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
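The repeated "connection refused" errors above all come from kubectl probing https://localhost:8443 while no API server is listening there yet. As a rough manual check on the node one could verify the container state and the port directly; this is only a sketch that reuses the crictl invocation already shown in this run, and it assumes ss is available on the node:

	sudo crictl ps -a --name=kube-apiserver    # is an apiserver container present, and in what state?
	sudo ss -ltn | grep 8443                   # is anything actually listening on 8443?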
	I1002 07:23:24.169201  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:24.181390  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:24.181463  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:24.213873  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:24.213896  346554 cri.go:89] found id: ""
	I1002 07:23:24.213905  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:24.213963  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.217730  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:24.217807  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:24.252439  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:24.252471  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:24.252476  346554 cri.go:89] found id: ""
	I1002 07:23:24.252484  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:24.252567  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.256307  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.260273  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:24.260349  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:24.287826  346554 cri.go:89] found id: ""
	I1002 07:23:24.287852  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.287862  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:24.287870  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:24.287973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:24.315859  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:24.315884  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:24.315890  346554 cri.go:89] found id: ""
	I1002 07:23:24.315897  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:24.315975  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.319993  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.323777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:24.323877  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:24.354601  346554 cri.go:89] found id: ""
	I1002 07:23:24.354631  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.354642  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:24.354648  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:24.354730  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:24.384370  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:24.384395  346554 cri.go:89] found id: ""
	I1002 07:23:24.384403  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:24.384488  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.388615  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:24.388695  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:24.415488  346554 cri.go:89] found id: ""
	I1002 07:23:24.415514  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.415523  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:24.415533  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:24.415546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:24.458158  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:24.458192  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:24.534624  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:24.534667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:24.567982  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:24.568016  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:24.596275  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:24.596306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:24.674293  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:24.674334  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:24.777997  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:24.778039  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:24.801006  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:24.801036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:24.862265  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:24.862303  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:24.913721  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:24.913755  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:24.991414  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:24.983196   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.983791   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985038   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985724   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.987370   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:24.983196   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.983791   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985038   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985724   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.987370   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:24.991443  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:24.991458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.525665  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:27.536783  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:27.536869  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:27.563440  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.563507  346554 cri.go:89] found id: ""
	I1002 07:23:27.563531  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:27.563623  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.568154  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:27.568278  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:27.597184  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:27.597205  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:27.597211  346554 cri.go:89] found id: ""
	I1002 07:23:27.597230  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:27.597306  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.601073  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.604808  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:27.604880  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:27.635124  346554 cri.go:89] found id: ""
	I1002 07:23:27.635147  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.635155  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:27.635161  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:27.635220  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:27.662383  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:27.662455  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:27.662474  346554 cri.go:89] found id: ""
	I1002 07:23:27.662500  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:27.662607  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.666537  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.670164  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:27.670238  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:27.697001  346554 cri.go:89] found id: ""
	I1002 07:23:27.697028  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.697037  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:27.697044  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:27.697127  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:27.722638  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:27.722662  346554 cri.go:89] found id: ""
	I1002 07:23:27.722672  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:27.722728  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.726512  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:27.726591  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:27.755270  346554 cri.go:89] found id: ""
	I1002 07:23:27.755300  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.755309  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:27.755319  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:27.755330  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:27.854338  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:27.854379  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:27.928550  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:27.920395   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.921207   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.922978   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.923800   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.924646   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:27.920395   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.921207   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.922978   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.923800   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.924646   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:27.928577  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:27.928590  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.960015  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:27.960047  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:28.025647  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:28.025706  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:28.064089  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:28.064125  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:28.158385  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:28.158423  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:28.196505  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:28.196533  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:28.215893  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:28.215921  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:28.246774  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:28.246821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:28.274010  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:28.274036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
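Each cycle above is the same collection pass: find the control-plane containers with crictl, then tail each one's logs plus the relevant journald units. A rough manual equivalent (a sketch, not minikube's actual implementation; it only strings together the commands already shown in this log) would be:

	for name in kube-apiserver etcd kube-scheduler kube-controller-manager; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    echo "== $name $id =="
	    sudo crictl logs --tail 400 "$id"
	  done
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400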
	I1002 07:23:30.852724  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:30.863588  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:30.863660  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:30.891349  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:30.891371  346554 cri.go:89] found id: ""
	I1002 07:23:30.891380  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:30.891457  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.895249  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:30.895343  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:30.922333  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:30.922356  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:30.922361  346554 cri.go:89] found id: ""
	I1002 07:23:30.922368  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:30.922423  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.926269  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.929885  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:30.929957  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:30.956216  346554 cri.go:89] found id: ""
	I1002 07:23:30.956253  346554 logs.go:282] 0 containers: []
	W1002 07:23:30.956269  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:30.956285  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:30.956347  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:30.984076  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:30.984101  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:30.984107  346554 cri.go:89] found id: ""
	I1002 07:23:30.984121  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:30.984182  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.988082  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.991650  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:30.991741  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:31.028148  346554 cri.go:89] found id: ""
	I1002 07:23:31.028174  346554 logs.go:282] 0 containers: []
	W1002 07:23:31.028184  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:31.028190  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:31.028274  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:31.057090  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:31.057116  346554 cri.go:89] found id: ""
	I1002 07:23:31.057125  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:31.057195  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:31.064614  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:31.064695  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:31.096928  346554 cri.go:89] found id: ""
	I1002 07:23:31.096996  346554 logs.go:282] 0 containers: []
	W1002 07:23:31.097022  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:31.097042  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:31.097069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:31.155662  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:31.155701  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:31.202926  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:31.202958  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:31.236483  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:31.236508  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:31.341179  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:31.341216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:31.368996  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:31.369022  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:31.449499  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:31.449539  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:31.476326  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:31.476354  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:31.561871  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:31.561909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:31.597214  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:31.597243  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:31.614646  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:31.614674  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:31.686141  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:31.672626   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.673293   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675177   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675791   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.677294   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:31.672626   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.673293   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675177   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675791   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.677294   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:34.187051  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:34.198084  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:34.198163  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:34.225977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:34.226000  346554 cri.go:89] found id: ""
	I1002 07:23:34.226009  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:34.226094  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.230977  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:34.231053  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:34.258817  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:34.258840  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:34.258845  346554 cri.go:89] found id: ""
	I1002 07:23:34.258853  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:34.258908  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.262894  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.266671  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:34.266772  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:34.296183  346554 cri.go:89] found id: ""
	I1002 07:23:34.296207  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.296217  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:34.296223  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:34.296283  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:34.329604  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:34.329678  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:34.329698  346554 cri.go:89] found id: ""
	I1002 07:23:34.329722  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:34.329830  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.333641  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.337102  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:34.337170  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:34.365600  346554 cri.go:89] found id: ""
	I1002 07:23:34.365626  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.365636  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:34.365645  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:34.365708  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:34.393323  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:34.393347  346554 cri.go:89] found id: ""
	I1002 07:23:34.393357  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:34.393439  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.397338  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:34.397411  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:34.423876  346554 cri.go:89] found id: ""
	I1002 07:23:34.423899  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.423908  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:34.423918  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:34.423934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:34.453221  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:34.453251  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:34.481067  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:34.481095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:34.558614  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:34.558651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:34.601917  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:34.601948  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:34.705602  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:34.705637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:34.769442  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:34.760694   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.761723   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.762620   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764275   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764621   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:34.760694   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.761723   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.762620   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764275   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764621   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:34.769466  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:34.769478  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:34.808589  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:34.808615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:34.869982  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:34.870024  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:34.959694  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:34.959739  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:34.976284  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:34.976319  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:37.518488  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:37.530159  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:37.530242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:37.557004  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:37.557026  346554 cri.go:89] found id: ""
	I1002 07:23:37.557035  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:37.557091  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.560903  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:37.560976  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:37.593556  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:37.593580  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:37.593586  346554 cri.go:89] found id: ""
	I1002 07:23:37.593594  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:37.593652  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.597692  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.601598  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:37.601672  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:37.628723  346554 cri.go:89] found id: ""
	I1002 07:23:37.628751  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.628761  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:37.628767  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:37.628832  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:37.656989  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:37.657010  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:37.657014  346554 cri.go:89] found id: ""
	I1002 07:23:37.657022  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:37.657090  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.660940  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.664730  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:37.664810  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:37.690545  346554 cri.go:89] found id: ""
	I1002 07:23:37.690567  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.690575  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:37.690582  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:37.690638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:37.718139  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:37.718164  346554 cri.go:89] found id: ""
	I1002 07:23:37.718173  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:37.718239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.722013  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:37.722130  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:37.748320  346554 cri.go:89] found id: ""
	I1002 07:23:37.748387  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.748410  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:37.748439  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:37.748478  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:37.848896  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:37.848937  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:37.935000  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:37.926953   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.927824   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929407   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929842   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.931438   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:37.926953   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.927824   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929407   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929842   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.931438   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:37.935035  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:37.935050  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:37.998904  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:37.998949  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:38.039239  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:38.039274  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:38.133839  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:38.133878  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:38.164590  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:38.164617  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:38.247363  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:38.247401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:38.263025  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:38.263053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:38.292185  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:38.292215  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:38.324631  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:38.324662  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:40.856053  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:40.866969  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:40.867037  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:40.908779  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:40.908802  346554 cri.go:89] found id: ""
	I1002 07:23:40.908811  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:40.908882  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.912652  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:40.912724  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:40.938681  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:40.938711  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:40.938717  346554 cri.go:89] found id: ""
	I1002 07:23:40.938725  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:40.938780  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.942512  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.945790  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:40.945860  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:40.973961  346554 cri.go:89] found id: ""
	I1002 07:23:40.974043  346554 logs.go:282] 0 containers: []
	W1002 07:23:40.974067  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:40.974093  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:40.974208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:41.001128  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:41.001152  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:41.001158  346554 cri.go:89] found id: ""
	I1002 07:23:41.001165  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:41.001239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.007592  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.012525  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:41.012642  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:41.044447  346554 cri.go:89] found id: ""
	I1002 07:23:41.044521  346554 logs.go:282] 0 containers: []
	W1002 07:23:41.044545  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:41.044571  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:41.044654  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:41.083149  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:41.083216  346554 cri.go:89] found id: ""
	I1002 07:23:41.083250  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:41.083338  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.087534  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:41.087663  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:41.118406  346554 cri.go:89] found id: ""
	I1002 07:23:41.118470  346554 logs.go:282] 0 containers: []
	W1002 07:23:41.118494  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:41.118528  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:41.118559  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:41.195975  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:41.196011  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:41.227140  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:41.227172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:41.313141  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:41.313180  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:41.416180  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:41.416218  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:41.459495  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:41.459536  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:41.488753  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:41.488785  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:41.532527  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:41.532560  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:41.548856  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:41.548885  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:41.618600  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:41.608308   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.609017   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.611140   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.612779   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.613471   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:41.608308   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.609017   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.611140   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.612779   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.613471   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:41.618624  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:41.618638  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:41.646628  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:41.646656  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.221221  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:44.231877  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:44.231950  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:44.257682  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:44.257714  346554 cri.go:89] found id: ""
	I1002 07:23:44.257724  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:44.257781  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.261470  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:44.261568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:44.291709  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.291732  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:44.291738  346554 cri.go:89] found id: ""
	I1002 07:23:44.291749  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:44.291806  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.295774  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.299744  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:44.299891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:44.326325  346554 cri.go:89] found id: ""
	I1002 07:23:44.326361  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.326372  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:44.326396  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:44.326476  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:44.353658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:44.353682  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:44.353687  346554 cri.go:89] found id: ""
	I1002 07:23:44.353694  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:44.353752  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.357660  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.361374  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:44.361448  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:44.390237  346554 cri.go:89] found id: ""
	I1002 07:23:44.390271  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.390281  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:44.390287  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:44.390356  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:44.421420  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:44.421444  346554 cri.go:89] found id: ""
	I1002 07:23:44.421453  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:44.421520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.425406  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:44.425480  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:44.453498  346554 cri.go:89] found id: ""
	I1002 07:23:44.453575  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.453599  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:44.453627  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:44.453663  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:44.469406  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:44.469489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:44.537881  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:44.529402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.530101   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.531787   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.532402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.534048   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:44.529402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.530101   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.531787   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.532402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.534048   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:44.537947  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:44.537976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:44.566669  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:44.566750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.626234  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:44.626311  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:44.663981  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:44.664015  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:44.743176  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:44.743211  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:44.769609  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:44.769637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:44.850618  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:44.850654  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:44.956047  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:44.956089  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:44.988388  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:44.988421  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:47.617924  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:47.629050  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:47.629142  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:47.657724  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:47.657747  346554 cri.go:89] found id: ""
	I1002 07:23:47.657756  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:47.657814  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.661805  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:47.661878  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:47.691884  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:47.691906  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:47.691911  346554 cri.go:89] found id: ""
	I1002 07:23:47.691919  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:47.691978  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.695983  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.699611  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:47.699685  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:47.731628  346554 cri.go:89] found id: ""
	I1002 07:23:47.731654  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.731664  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:47.731671  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:47.731732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:47.760694  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:47.760718  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:47.760723  346554 cri.go:89] found id: ""
	I1002 07:23:47.760731  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:47.760830  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.764776  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.768282  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:47.768363  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:47.800941  346554 cri.go:89] found id: ""
	I1002 07:23:47.800967  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.800976  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:47.800982  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:47.801049  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:47.828847  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:47.828870  346554 cri.go:89] found id: ""
	I1002 07:23:47.828879  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:47.828955  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.832777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:47.832850  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:47.861095  346554 cri.go:89] found id: ""
	I1002 07:23:47.861122  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.861131  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:47.861141  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:47.861184  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:47.893617  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:47.893649  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:47.990939  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:47.990977  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:48.007073  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:48.007153  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:48.043757  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:48.043786  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:48.136713  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:48.136750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:48.168119  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:48.168151  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:48.251880  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:48.251919  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:48.285530  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:48.285566  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:48.357500  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:48.349599   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.350239   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.351899   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.352380   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.353981   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:48.349599   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.350239   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.351899   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.352380   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.353981   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:48.357522  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:48.357537  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:48.403215  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:48.403293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.006650  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:51.028354  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:51.028471  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:51.057229  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:51.057253  346554 cri.go:89] found id: ""
	I1002 07:23:51.057262  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:51.057329  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.061731  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:51.061807  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:51.089750  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:51.089772  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:51.089778  346554 cri.go:89] found id: ""
	I1002 07:23:51.089785  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:51.089848  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.094055  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.097989  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:51.098090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:51.125460  346554 cri.go:89] found id: ""
	I1002 07:23:51.125487  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.125510  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:51.125536  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:51.125611  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:51.155658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.155684  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:51.155689  346554 cri.go:89] found id: ""
	I1002 07:23:51.155698  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:51.155757  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.159937  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.164562  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:51.164639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:51.194590  346554 cri.go:89] found id: ""
	I1002 07:23:51.194626  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.194635  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:51.194642  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:51.194720  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:51.230400  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:51.230424  346554 cri.go:89] found id: ""
	I1002 07:23:51.230433  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:51.230501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.235241  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:51.235335  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:51.264526  346554 cri.go:89] found id: ""
	I1002 07:23:51.264551  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.264562  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:51.264573  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:51.264603  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:51.292045  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:51.292128  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.377066  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:51.377104  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:51.408242  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:51.408273  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:51.437071  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:51.437100  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:51.508699  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:51.498128   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.498923   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.500573   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.501129   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.502653   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:51.498128   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.498923   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.500573   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.501129   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.502653   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:51.508723  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:51.508736  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:51.594052  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:51.594094  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:51.631968  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:51.632002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:51.710908  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:51.710950  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:51.751275  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:51.751309  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:51.859428  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:51.859510  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:54.376917  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:54.388247  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:54.388322  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:54.417539  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:54.417563  346554 cri.go:89] found id: ""
	I1002 07:23:54.417571  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:54.417634  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.421536  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:54.421612  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:54.452318  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:54.452342  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:54.452347  346554 cri.go:89] found id: ""
	I1002 07:23:54.452355  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:54.452410  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.457434  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.460992  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:54.461070  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:54.494010  346554 cri.go:89] found id: ""
	I1002 07:23:54.494031  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.494040  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:54.494045  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:54.494107  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:54.528280  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:54.528300  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:54.528305  346554 cri.go:89] found id: ""
	I1002 07:23:54.528312  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:54.528369  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.532283  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.535876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:54.535946  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:54.564214  346554 cri.go:89] found id: ""
	I1002 07:23:54.564240  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.564250  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:54.564256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:54.564347  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:54.594060  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:54.594084  346554 cri.go:89] found id: ""
	I1002 07:23:54.594093  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:54.594169  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.598344  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:54.598442  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:54.632402  346554 cri.go:89] found id: ""
	I1002 07:23:54.632426  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.632435  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:54.632445  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:54.632500  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:54.729477  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:54.729517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:54.800743  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:54.791704   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.792414   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794124   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794646   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.796482   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:54.791704   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.792414   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794124   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794646   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.796482   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:54.800815  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:54.800846  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:54.861032  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:54.861069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:54.889171  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:54.889244  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:54.925585  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:54.925615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:54.941174  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:54.941202  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:54.969205  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:54.969235  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:55.020047  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:55.020087  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:55.098725  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:55.098805  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:55.132210  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:55.132239  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:57.716428  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:57.730713  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:57.730787  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:57.757853  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:57.757878  346554 cri.go:89] found id: ""
	I1002 07:23:57.757887  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:57.757943  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.761971  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:57.762045  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:57.790866  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:57.790891  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:57.790897  346554 cri.go:89] found id: ""
	I1002 07:23:57.790904  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:57.790962  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.795621  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.799575  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:57.799653  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:57.830281  346554 cri.go:89] found id: ""
	I1002 07:23:57.830307  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.830317  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:57.830323  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:57.830382  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:57.858397  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:57.858420  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:57.858425  346554 cri.go:89] found id: ""
	I1002 07:23:57.858433  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:57.858488  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.862244  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.865851  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:57.865951  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:57.893160  346554 cri.go:89] found id: ""
	I1002 07:23:57.893234  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.893250  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:57.893258  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:57.893318  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:57.920413  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:57.920499  346554 cri.go:89] found id: ""
	I1002 07:23:57.920516  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:57.920585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.924327  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:57.924423  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:57.951174  346554 cri.go:89] found id: ""
	I1002 07:23:57.951197  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.951206  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:57.951216  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:57.951268  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:57.986550  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:57.986632  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:58.017224  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:58.017260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:58.122339  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:58.122377  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:58.138465  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:58.138494  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:58.168292  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:58.168317  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:58.230852  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:58.230890  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:58.328715  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:58.328764  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:58.357761  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:58.357792  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:58.444436  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:58.444482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:58.478280  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:58.478306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:58.560395  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:58.551535   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.552077   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554124   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554594   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.555744   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:58.551535   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.552077   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554124   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554594   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.555744   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:01.061663  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:01.077726  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:01.077804  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:01.106834  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:01.106860  346554 cri.go:89] found id: ""
	I1002 07:24:01.106869  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:01.106940  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.110940  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:01.111014  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:01.139370  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:01.139392  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:01.139397  346554 cri.go:89] found id: ""
	I1002 07:24:01.139404  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:01.139466  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.143857  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.148114  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:01.148207  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:01.178376  346554 cri.go:89] found id: ""
	I1002 07:24:01.178468  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.178493  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:01.178522  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:01.178635  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:01.208075  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:01.208098  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:01.208103  346554 cri.go:89] found id: ""
	I1002 07:24:01.208111  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:01.208178  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.212014  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.216098  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:01.216233  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:01.245384  346554 cri.go:89] found id: ""
	I1002 07:24:01.245424  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.245434  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:01.245440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:01.245503  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:01.282247  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:01.282322  346554 cri.go:89] found id: ""
	I1002 07:24:01.282346  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:01.282443  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.288826  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:01.288905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:01.319901  346554 cri.go:89] found id: ""
	I1002 07:24:01.319926  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.319934  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:01.319943  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:01.319956  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:01.389606  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:01.389692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:01.444021  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:01.444055  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:01.526762  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:01.526804  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:01.559019  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:01.559049  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:01.634782  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:01.634818  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:01.709026  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:01.699679   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.700913   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.701980   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.702845   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.704779   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:01.699679   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.700913   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.701980   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.702845   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.704779   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:01.709100  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:01.709120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:01.738970  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:01.739000  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:01.770329  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:01.770364  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:01.884154  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:01.884232  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:01.902364  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:01.902390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.435943  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:04.447669  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:04.447785  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:04.478942  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.478965  346554 cri.go:89] found id: ""
	I1002 07:24:04.478974  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:04.479030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.483417  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:04.483511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:04.518294  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:04.518320  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:04.518325  346554 cri.go:89] found id: ""
	I1002 07:24:04.518334  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:04.518388  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.522223  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.526427  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:04.526558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:04.558950  346554 cri.go:89] found id: ""
	I1002 07:24:04.558987  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.558996  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:04.559003  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:04.559153  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:04.586620  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:04.586645  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:04.586650  346554 cri.go:89] found id: ""
	I1002 07:24:04.586658  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:04.586737  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.590676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.594540  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:04.594644  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:04.621686  346554 cri.go:89] found id: ""
	I1002 07:24:04.621709  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.621719  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:04.621725  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:04.621781  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:04.649834  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:04.649855  346554 cri.go:89] found id: ""
	I1002 07:24:04.649863  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:04.649944  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.654335  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:04.654436  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:04.687143  346554 cri.go:89] found id: ""
	I1002 07:24:04.687166  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.687175  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:04.687184  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:04.687216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.715298  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:04.715329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:04.758402  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:04.758436  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:04.838751  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:04.838789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:04.870372  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:04.870403  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:04.984168  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:04.984207  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:04.999826  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:04.999858  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:05.088672  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:05.079342   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.080234   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082236   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082893   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.084684   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:05.079342   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.080234   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082236   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082893   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.084684   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:05.088696  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:05.088709  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:05.150024  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:05.150063  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:05.226780  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:05.226819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:05.255567  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:05.255605  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:07.791197  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:07.803594  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:07.803689  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:07.833077  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:07.833103  346554 cri.go:89] found id: ""
	I1002 07:24:07.833113  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:07.833214  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.837537  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:07.837661  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:07.866899  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:07.866926  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:07.866932  346554 cri.go:89] found id: ""
	I1002 07:24:07.866939  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:07.867000  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.870759  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.874593  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:07.874713  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:07.903524  346554 cri.go:89] found id: ""
	I1002 07:24:07.903587  346554 logs.go:282] 0 containers: []
	W1002 07:24:07.903620  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:07.903644  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:07.903738  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:07.934472  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:07.934547  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:07.934567  346554 cri.go:89] found id: ""
	I1002 07:24:07.934593  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:07.934688  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.938660  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.942349  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:07.942453  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:07.969924  346554 cri.go:89] found id: ""
	I1002 07:24:07.969947  346554 logs.go:282] 0 containers: []
	W1002 07:24:07.969956  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:07.969964  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:07.970022  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:07.998801  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:07.998826  346554 cri.go:89] found id: ""
	I1002 07:24:07.998834  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:07.998890  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:08.006051  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:08.006218  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:08.043683  346554 cri.go:89] found id: ""
	I1002 07:24:08.043712  346554 logs.go:282] 0 containers: []
	W1002 07:24:08.043723  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:08.043733  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:08.043746  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:08.094506  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:08.094546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:08.175873  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:08.175912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:08.208161  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:08.208191  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:08.234954  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:08.234983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:08.301287  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:08.301325  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:08.377087  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:08.377123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:08.405378  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:08.405407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:08.431355  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:08.431386  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:08.536433  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:08.536479  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:08.553542  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:08.553575  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:08.621305  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:08.613680   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.614222   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.615692   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.616097   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.617557   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:08.613680   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.614222   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.615692   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.616097   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.617557   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:11.122975  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:11.135150  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:11.135231  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:11.168608  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:11.168633  346554 cri.go:89] found id: ""
	I1002 07:24:11.168642  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:11.168704  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.172810  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:11.172893  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:11.204325  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:11.204401  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:11.204413  346554 cri.go:89] found id: ""
	I1002 07:24:11.204422  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:11.204491  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.208514  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.212208  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:11.212287  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:11.245698  346554 cri.go:89] found id: ""
	I1002 07:24:11.245725  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.245736  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:11.245743  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:11.245805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:11.274196  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:11.274219  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:11.274224  346554 cri.go:89] found id: ""
	I1002 07:24:11.274231  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:11.274292  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.278411  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.282735  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:11.282813  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:11.322108  346554 cri.go:89] found id: ""
	I1002 07:24:11.322129  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.322138  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:11.322144  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:11.322203  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:11.350582  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:11.350647  346554 cri.go:89] found id: ""
	I1002 07:24:11.350659  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:11.350715  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.354559  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:11.354628  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:11.386834  346554 cri.go:89] found id: ""
	I1002 07:24:11.386899  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.386923  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:11.386951  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:11.386981  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:11.465595  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:11.465632  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:11.541894  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:11.541933  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:11.619365  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:11.619408  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:11.647305  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:11.647336  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:11.686923  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:11.686952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:11.792344  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:11.792440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:11.814593  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:11.814623  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:11.895211  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:11.886121   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.886872   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.888767   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.889333   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.890295   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:11.886121   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.886872   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.888767   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.889333   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.890295   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:11.895236  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:11.895250  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:11.921556  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:11.921586  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:11.957833  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:11.957872  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:14.490490  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:14.502377  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:14.502482  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:14.534162  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:14.534185  346554 cri.go:89] found id: ""
	I1002 07:24:14.534205  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:14.534262  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.538631  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:14.538701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:14.568427  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:14.568450  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:14.568456  346554 cri.go:89] found id: ""
	I1002 07:24:14.568463  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:14.568527  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.572917  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.576683  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:14.576760  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:14.604778  346554 cri.go:89] found id: ""
	I1002 07:24:14.604809  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.604819  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:14.604825  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:14.604932  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:14.631788  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:14.631812  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:14.631817  346554 cri.go:89] found id: ""
	I1002 07:24:14.631824  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:14.631887  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.635951  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.639653  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:14.639769  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:14.682797  346554 cri.go:89] found id: ""
	I1002 07:24:14.682823  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.682832  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:14.682839  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:14.682899  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:14.722146  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:14.722175  346554 cri.go:89] found id: ""
	I1002 07:24:14.722183  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:14.722239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.727035  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:14.727164  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:14.759413  346554 cri.go:89] found id: ""
	I1002 07:24:14.759438  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.759447  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:14.759458  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:14.759470  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:14.786929  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:14.787000  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:14.853005  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:14.853042  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:14.899040  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:14.899071  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:15.004708  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:15.004742  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:15.123051  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:15.123106  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:15.154325  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:15.154357  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:15.183161  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:15.183248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:15.265975  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:15.266013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:15.299575  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:15.299607  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:15.315427  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:15.315454  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:15.394115  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:15.385425   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.386315   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388134   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388810   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.390355   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:15.385425   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.386315   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388134   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388810   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.390355   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:17.895569  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:17.909876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:17.909985  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:17.941059  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:17.941083  346554 cri.go:89] found id: ""
	I1002 07:24:17.941092  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:17.941159  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.945318  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:17.945401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:17.973722  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:17.973743  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:17.973747  346554 cri.go:89] found id: ""
	I1002 07:24:17.973755  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:17.973813  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.978340  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.983135  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:17.983214  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:18.024398  346554 cri.go:89] found id: ""
	I1002 07:24:18.024424  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.024433  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:18.024440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:18.024518  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:18.053513  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:18.053535  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:18.053540  346554 cri.go:89] found id: ""
	I1002 07:24:18.053548  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:18.053631  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.057706  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.061744  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:18.061820  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:18.093847  346554 cri.go:89] found id: ""
	I1002 07:24:18.093873  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.093884  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:18.093891  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:18.093956  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:18.123256  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:18.123283  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:18.123289  346554 cri.go:89] found id: ""
	I1002 07:24:18.123296  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:18.123355  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.127263  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.131206  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:18.131284  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:18.157688  346554 cri.go:89] found id: ""
	I1002 07:24:18.157714  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.157724  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:18.157733  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:18.157745  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:18.203920  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:18.203946  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:18.220036  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:18.220064  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:18.288859  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:18.281281   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.282404   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283332   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283985   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.285062   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:18.281281   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.282404   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283332   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283985   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.285062   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:18.288885  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:18.288898  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:18.326029  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:18.326064  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:18.410880  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:18.410919  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:18.516955  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:18.516994  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:18.548753  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:18.548786  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:18.613812  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:18.613849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:18.643416  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:18.643444  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:18.670170  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:18.670199  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:18.699194  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:18.699231  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:21.274356  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:21.285713  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:21.285785  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:21.312389  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:21.312413  346554 cri.go:89] found id: ""
	I1002 07:24:21.312427  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:21.312492  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.316212  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:21.316290  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:21.341368  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:21.341390  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:21.341396  346554 cri.go:89] found id: ""
	I1002 07:24:21.341403  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:21.341458  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.345157  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.348764  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:21.348841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:21.381263  346554 cri.go:89] found id: ""
	I1002 07:24:21.381292  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.381302  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:21.381308  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:21.381366  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:21.412001  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:21.412022  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:21.412027  346554 cri.go:89] found id: ""
	I1002 07:24:21.412035  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:21.412092  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.415991  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.419745  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:21.419818  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:21.448790  346554 cri.go:89] found id: ""
	I1002 07:24:21.448817  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.448826  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:21.448832  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:21.448894  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:21.476863  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:21.476885  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:21.476890  346554 cri.go:89] found id: ""
	I1002 07:24:21.476897  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:21.476995  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.481180  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.484939  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:21.485015  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:21.518979  346554 cri.go:89] found id: ""
	I1002 07:24:21.519005  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.519014  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:21.519023  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:21.519035  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:21.548837  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:21.548868  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:21.577649  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:21.577678  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:21.614505  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:21.614538  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:21.648602  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:21.648630  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:21.730478  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:21.730515  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:21.770385  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:21.770420  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:21.869953  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:21.869990  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:21.890825  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:21.890864  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:21.963492  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:21.954886   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.955596   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957198   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957744   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.959330   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:21.954886   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.955596   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957198   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957744   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.959330   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:21.963514  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:21.963531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:21.990531  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:21.990559  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:22.069923  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:22.070005  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:24.652448  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:24.663850  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:24.663928  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:24.691270  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:24.691349  346554 cri.go:89] found id: ""
	I1002 07:24:24.691385  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:24.691483  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.695776  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:24.695846  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:24.722540  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:24.722563  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:24.722568  346554 cri.go:89] found id: ""
	I1002 07:24:24.722575  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:24.722641  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.726529  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.730111  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:24.730184  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:24.760973  346554 cri.go:89] found id: ""
	I1002 07:24:24.760999  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.761009  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:24.761015  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:24.761096  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:24.788682  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:24.788702  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:24.788707  346554 cri.go:89] found id: ""
	I1002 07:24:24.788714  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:24.788771  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.795284  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.800831  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:24.800927  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:24.826399  346554 cri.go:89] found id: ""
	I1002 07:24:24.826434  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.826443  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:24.826464  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:24.826550  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:24.854301  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:24.854328  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:24.854334  346554 cri.go:89] found id: ""
	I1002 07:24:24.854341  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:24.854423  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.858547  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.862285  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:24.862407  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:24.892024  346554 cri.go:89] found id: ""
	I1002 07:24:24.892048  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.892057  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:24.892067  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:24.892079  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:24.993633  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:24.993672  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:25.023967  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:25.023999  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:25.088069  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:25.088104  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:25.171716  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:25.171754  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:25.211296  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:25.211330  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:25.277865  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:25.269711   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.270447   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272032   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272563   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.274098   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:25.269711   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.270447   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272032   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272563   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.274098   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:25.277888  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:25.277901  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:25.305336  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:25.305363  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:25.339149  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:25.339311  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:25.419370  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:25.419407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:25.452415  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:25.452447  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:25.482792  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:25.482824  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:28.019833  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:28.031976  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:28.032047  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:28.061518  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:28.061538  346554 cri.go:89] found id: ""
	I1002 07:24:28.061547  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:28.061610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.065737  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:28.065812  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:28.100250  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:28.100274  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:28.100280  346554 cri.go:89] found id: ""
	I1002 07:24:28.100287  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:28.100347  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.104729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.109130  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:28.109242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:28.136194  346554 cri.go:89] found id: ""
	I1002 07:24:28.136220  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.136229  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:28.136235  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:28.136294  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:28.177728  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:28.177751  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:28.177756  346554 cri.go:89] found id: ""
	I1002 07:24:28.177764  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:28.177822  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.182057  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.185909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:28.185984  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:28.213081  346554 cri.go:89] found id: ""
	I1002 07:24:28.213104  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.213114  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:28.213120  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:28.213180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:28.242037  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:28.242061  346554 cri.go:89] found id: ""
	I1002 07:24:28.242070  346554 logs.go:282] 1 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd]
	I1002 07:24:28.242125  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.245909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:28.245982  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:28.272643  346554 cri.go:89] found id: ""
	I1002 07:24:28.272688  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.272698  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:28.272708  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:28.272741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:28.368590  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:28.368674  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:28.441922  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:28.433374   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.434538   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.435818   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.436626   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.438305   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:28.433374   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.434538   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.435818   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.436626   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.438305   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:28.441993  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:28.442025  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:28.485137  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:28.485174  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:28.519916  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:28.519949  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:28.547334  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:28.547364  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:28.578668  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:28.578698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:28.597024  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:28.597053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:28.625533  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:28.625562  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:28.703945  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:28.703983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:28.782221  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:28.782256  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:31.363217  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:31.375576  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:31.375651  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:31.412392  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:31.412416  346554 cri.go:89] found id: ""
	I1002 07:24:31.412425  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:31.412489  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.416397  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:31.416497  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:31.447142  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:31.447172  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:31.447178  346554 cri.go:89] found id: ""
	I1002 07:24:31.447186  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:31.447245  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.451130  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.454872  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:31.454972  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:31.491372  346554 cri.go:89] found id: ""
	I1002 07:24:31.491393  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.491401  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:31.491407  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:31.491464  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:31.523581  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:31.523606  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:31.523611  346554 cri.go:89] found id: ""
	I1002 07:24:31.523618  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:31.523696  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.527714  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.531521  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:31.531638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:31.557016  346554 cri.go:89] found id: ""
	I1002 07:24:31.557090  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.557110  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:31.557117  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:31.557180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:31.587792  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:31.587815  346554 cri.go:89] found id: ""
	I1002 07:24:31.587824  346554 logs.go:282] 1 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd]
	I1002 07:24:31.587900  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.591474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:31.591544  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:31.621938  346554 cri.go:89] found id: ""
	I1002 07:24:31.622002  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.622025  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:31.622057  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:31.622087  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:31.699830  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:31.699940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:31.731270  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:31.731297  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:31.830036  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:31.830073  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:31.849448  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:31.849489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:31.887973  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:31.888002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:31.925845  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:31.925879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:31.955314  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:31.955344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:32.027448  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:32.017106   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.018245   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.019008   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.021153   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.022262   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:32.017106   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.018245   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.019008   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.021153   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.022262   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:32.027527  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:32.027556  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:32.097086  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:32.097123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:32.181841  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:32.181877  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:34.710633  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:34.725897  346554 out.go:203] 
	W1002 07:24:34.728826  346554 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1002 07:24:34.728867  346554 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1002 07:24:34.728877  346554 out.go:285] * Related issues:
	W1002 07:24:34.728892  346554 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1002 07:24:34.728908  346554 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1002 07:24:34.732168  346554 out.go:203] 
	
	
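	The K8S_APISERVER_MISSING exit above follows repeated failures of the apiserver-process probe that the log keeps re-running. A minimal manual check, assuming SSH access to the ha-550225 node and the same crictl binary the log already invokes (commands below are taken from the probes shown above; <container-id> is a placeholder for an ID returned by the ps step):
	
	  # Probe minikube repeats: is a kube-apiserver process running?
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  # List apiserver containers known to CRI-O, including exited ones
	  sudo crictl ps -a --name=kube-apiserver
	  # Inspect the most recent apiserver container's output for startup errors
	  sudo crictl logs --tail 50 <container-id>
	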
	==> CRI-O <==
	Oct 02 07:19:49 ha-550225 crio[619]: time="2025-10-02T07:19:49.845674437Z" level=info msg="Started container" PID=1394 containerID=3269c04f5498e2befbc42b6cf2cdbe83a291623d3fde767dc07389c7422afd48 description=kube-system/coredns-66bc5c9577-s6dq8/coredns id=566bb378-7524-4452-b1e6-a25280ba5d7d name=/runtime.v1.RuntimeService/StartContainer sandboxID=e055873f04c2899609f0c3b597c607526b01fd136aa0e5f79f2676a446255f13
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.208804519Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.215218136Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.215264529Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.215287667Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.22352303Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.223562538Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.223586029Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.23080621Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.230844857Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.230864434Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.236373132Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.236409153Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:20:15 ha-550225 conmon[1183]: conmon 48fccb25ba33b3850afc <ninfo>: container 1186 exited with status 1
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.461105809Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5008df2b-58c5-42b1-a1f6-e14a10f1abbb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.46213329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b8ddfc43-aba7-4f99-b91d-97240f3eaf35 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.46331964Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=55bd6811-47fe-4715-9579-6244ca41dc93 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.463596057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.472956017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.47327584Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6958a022ca5d2e537c24f18da644191de8f0c379072dbf05004476abea1680e8/merged/etc/passwd: no such file or directory"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.473326269Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6958a022ca5d2e537c24f18da644191de8f0c379072dbf05004476abea1680e8/merged/etc/group: no such file or directory"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.473692689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.493904849Z" level=info msg="Created container 5b2624a029b4c010b76ac52edd332193351ee65c37100ef8fbe63d85d02c3e71: kube-system/storage-provisioner/storage-provisioner" id=55bd6811-47fe-4715-9579-6244ca41dc93 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.495150407Z" level=info msg="Starting container: 5b2624a029b4c010b76ac52edd332193351ee65c37100ef8fbe63d85d02c3e71" id=b45832b0-a0c9-4ad1-8a10-5fba7e2ccb21 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.499183546Z" level=info msg="Started container" PID=1457 containerID=5b2624a029b4c010b76ac52edd332193351ee65c37100ef8fbe63d85d02c3e71 description=kube-system/storage-provisioner/storage-provisioner id=b45832b0-a0c9-4ad1-8a10-5fba7e2ccb21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc2b31ede15861c2d07fce3991053334dcdd31f17b14021784ac1be8ed7e0b31
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	5b2624a029b4c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Running             storage-provisioner       2                   bc2b31ede1586       storage-provisioner                 kube-system
	3269c04f5498e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   1                   e055873f04c28       coredns-66bc5c9577-s6dq8            kube-system
	448d4967d9024       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   5 minutes ago       Running             busybox                   1                   e934129b46d08       busybox-7b57f96db7-gph4b            default
	8a9ee715e4343       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Running             kindnet-cni               1                   edd2550dab874       kindnet-v7wnc                       kube-system
	5051222f30f0a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 minutes ago       Running             kube-proxy                1                   3e269f3dd585c       kube-proxy-skqs2                    kube-system
	48fccb25ba33b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Exited              storage-provisioner       1                   bc2b31ede1586       storage-provisioner                 kube-system
	97a0ea46cf7f7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   70fe4e27581bb       coredns-66bc5c9577-7gnh8            kube-system
	0dcd791f01f43       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   5 minutes ago       Running             kube-controller-manager   11                  19a2185d4a1eb       kube-controller-manager-ha-550225   kube-system
	8290015e8c15e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   5 minutes ago       Running             kube-apiserver            10                  b2181fe55e225       kube-apiserver-ha-550225            kube-system
	29394f92b6a36       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   10                  19a2185d4a1eb       kube-controller-manager-ha-550225   kube-system
	5b0c0535da780       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Exited              kube-apiserver            9                   b2181fe55e225       kube-apiserver-ha-550225            kube-system
	5f7223d3b4009       27aa99ef07bb63db109cae7189f6029203a1ba86e8d201ca72eb836e3cdd0b43   7 minutes ago       Running             kube-vip                  1                   c455a5f1f2468       kube-vip-ha-550225                  kube-system
	43f493b22d959       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Running             etcd                      3                   8c156781bf4ef       etcd-ha-550225                      kube-system
	2b4cd729501f6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            2                   b0329f645e59c       kube-scheduler-ha-550225            kube-system
	
	
	==> coredns [3269c04f5498e2befbc42b6cf2cdbe83a291623d3fde767dc07389c7422afd48] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50597 - 50866 "HINFO IN 2471821353559588233.5453610813505731232. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027203243s
	
	
	==> coredns [97a0ea46cf7f751b62a77918089760dd2e292198c9c2fc951fc282e4636ba492] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56369 - 30635 "HINFO IN 7137530019898463004.8479900960678889237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 2.018878387s
	[INFO] 127.0.0.1:38056 - 50955 "HINFO IN 7137530019898463004.8479900960678889237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041678969s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-550225
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_03_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:02:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:24:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:02:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:02:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:02:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:03:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-550225
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 804fc56d691a47babcd58cd3553282d3
	  System UUID:                96b9796d-f076-4bf0-ac0e-2eccc9d5873e
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-gph4b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-66bc5c9577-7gnh8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     21m
	  kube-system                 coredns-66bc5c9577-s6dq8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     21m
	  kube-system                 etcd-ha-550225                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         21m
	  kube-system                 kindnet-v7wnc                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-apiserver-ha-550225             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-550225    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-skqs2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-550225             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-550225                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 21m                    kube-proxy       
	  Normal   Starting                 5m2s                   kube-proxy       
	  Normal   Starting                 22m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 22m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  21m (x8 over 22m)      kubelet          Node ha-550225 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     21m (x8 over 22m)      kubelet          Node ha-550225 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    21m (x8 over 22m)      kubelet          Node ha-550225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasNoDiskPressure    21m                    kubelet          Node ha-550225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m                    kubelet          Node ha-550225 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  21m                    kubelet          Node ha-550225 status is now: NodeHasSufficientMemory
	  Normal   Starting                 21m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 21m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           21m                    node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   RegisteredNode           21m                    node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   NodeReady                21m                    kubelet          Node ha-550225 status is now: NodeReady
	  Normal   RegisteredNode           19m                    node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   RegisteredNode           16m                    node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   Starting                 7m57s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m57s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m57s (x8 over 7m57s)  kubelet          Node ha-550225 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m57s (x8 over 7m57s)  kubelet          Node ha-550225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m57s (x8 over 7m57s)  kubelet          Node ha-550225 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m48s                  node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	
	
	Name:               ha-550225-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_02T07_03_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:03:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:08:21 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-550225-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 08dcc5805aac4edbab34bc4710db5eef
	  System UUID:                c6a05e31-956b-4e2f-af6e-62090982b7b4
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wbl7l                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-550225-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         21m
	  kube-system                 kindnet-n6kwf                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-apiserver-ha-550225-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-550225-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-jkkmq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-550225-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-550225-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   RegisteredNode           21m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   RegisteredNode           21m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   RegisteredNode           19m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-550225-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-550225-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x8 over 17m)  kubelet          Node ha-550225-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   RegisteredNode           5m48s              node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   NodeNotReady             4m58s              node-controller  Node ha-550225-m02 status is now: NodeNotReady
	
	
	Name:               ha-550225-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_02T07_04_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:04:57 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:08:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-550225-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 315218fdc78646b99ded6becf46edf67
	  System UUID:                4ea95856-3488-4a4f-b299-e71342dd8d89
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-q95k5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-550225-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-2w4k5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-ha-550225-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-550225-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-2k945                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-550225-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-550225-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        19m    kube-proxy       
	  Normal  RegisteredNode  19m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  19m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  19m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  16m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  5m48s  node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  NodeNotReady    4m58s  node-controller  Node ha-550225-m03 status is now: NodeNotReady
	
	
	Name:               ha-550225-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_02T07_06_15_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:06:14 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:08:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-550225-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 4bfee30c7b434881a054adc06b7ffd73
	  System UUID:                9c87cedb-25ad-496a-a907-0c95201b1fe7
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2h5qc       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-proxy-gf52r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  RegisteredNode           18m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  NodeHasSufficientMemory  18m (x4 over 18m)  kubelet          Node ha-550225-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x4 over 18m)  kubelet          Node ha-550225-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x4 over 18m)  kubelet          Node ha-550225-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  NodeReady                17m                kubelet          Node ha-550225-m04 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  RegisteredNode           5m48s              node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  NodeNotReady             4m58s              node-controller  Node ha-550225-m04 status is now: NodeNotReady
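
Note: the four node descriptions above are the standard "kubectl describe nodes" view captured in this log bundle. ha-550225 is still Ready, while ha-550225-m02, -m03 and -m04 have reported Unknown conditions since 07:19:50 (their kubelets stopped renewing leases around 07:08) and carry node.kubernetes.io/unreachable taints. A comparable snapshot can be taken by hand, assuming kubectl is pointed at this ha-550225 cluster:

	# quick readiness/taint overview, then the full per-node detail shown above
	kubectl get nodes -o wide
	kubectl describe nodes
	# most recent cluster events, useful for spotting the NodeNotReady transitions
	kubectl get events -A --sort-by=.lastTimestamp | tail -n 30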
	
	
	==> dmesg <==
	[Oct 2 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014797] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531434] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.039899] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.787301] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.571073] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 2 05:52] hrtimer: interrupt took 24222969 ns
	[Oct 2 06:40] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:42] overlayfs: idmapped layers are currently not supported
	[  +0.072713] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 06:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 06:49] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:03] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:06] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:07] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:08] overlayfs: idmapped layers are currently not supported
	[  +3.056037] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:16] overlayfs: idmapped layers are currently not supported
	[  +2.690454] overlayfs: idmapped layers are currently not supported
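
Note: the dmesg excerpt above is boot-time noise plus repeated "overlayfs: idmapped layers are currently not supported" messages, which show up as new overlay mounts are created on this kernel; nothing here points at a kernel-level failure. A fresher kernel log can be pulled from the node, assuming the ha-550225 profile is still running:

	minikube -p ha-550225 ssh "sudo dmesg | tail -n 40"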
	
	
	==> etcd [43f493b22d959eb4018498d0af4c8a03328857db3567f13cb0ffaee9ec06c00b] <==
	{"level":"warn","ts":"2025-10-02T07:24:48.379341Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.380756Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.388452Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.391612Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.396248Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.406484Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.414671Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.422279Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.428575Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.432625Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.434994Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.443320Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.451531Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.455429Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.458534Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.462623Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.471520Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.480751Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.482889Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.489023Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.492426Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.495388Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.504747Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.515337Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:48.579001Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 07:24:48 up  2:07,  0 user,  load average: 1.53, 1.04, 1.15
	Linux ha-550225 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8a9ee715e43431e349cf8c9be623f1a296d01184f3204e6a4a0f8394fc70358e] <==
	I1002 07:24:18.215444       1 main.go:324] Node ha-550225-m03 has CIDR [10.244.2.0/24] 
	I1002 07:24:28.207379       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1002 07:24:28.207511       1 main.go:324] Node ha-550225-m02 has CIDR [10.244.1.0/24] 
	I1002 07:24:28.207747       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1002 07:24:28.207827       1 main.go:324] Node ha-550225-m03 has CIDR [10.244.2.0/24] 
	I1002 07:24:28.207968       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1002 07:24:28.208017       1 main.go:324] Node ha-550225-m04 has CIDR [10.244.3.0/24] 
	I1002 07:24:28.208188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:28.208240       1 main.go:301] handling current node
	I1002 07:24:38.211259       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:38.211291       1 main.go:301] handling current node
	I1002 07:24:38.211307       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1002 07:24:38.211313       1 main.go:324] Node ha-550225-m02 has CIDR [10.244.1.0/24] 
	I1002 07:24:38.211454       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1002 07:24:38.211461       1 main.go:324] Node ha-550225-m03 has CIDR [10.244.2.0/24] 
	I1002 07:24:38.211513       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1002 07:24:38.211519       1 main.go:324] Node ha-550225-m04 has CIDR [10.244.3.0/24] 
	I1002 07:24:48.211187       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1002 07:24:48.211220       1 main.go:324] Node ha-550225-m03 has CIDR [10.244.2.0/24] 
	I1002 07:24:48.211353       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1002 07:24:48.211359       1 main.go:324] Node ha-550225-m04 has CIDR [10.244.3.0/24] 
	I1002 07:24:48.211418       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:48.211425       1 main.go:301] handling current node
	I1002 07:24:48.211436       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1002 07:24:48.211441       1 main.go:324] Node ha-550225-m02 has CIDR [10.244.1.0/24] 
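
Note: the kindnet log shows the CNI agent on ha-550225 still cycling through all four nodes every ten seconds and tracking their pod CIDRs (10.244.0.0/24 through 10.244.3.0/24), so network bookkeeping on the surviving node looks healthy. The per-node agents can be inspected directly, assuming kindnet's usual app=kindnet label:

	kubectl -n kube-system get pods -l app=kindnet -o wide
	kubectl -n kube-system logs kindnet-v7wnc --tail=20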
	
	
	==> kube-apiserver [5b0c0535da7807f278c4629073d71180fc43a369ddae7136c7ffd515a7e95c6b] <==
	I1002 07:18:00.892979       1 server.go:150] Version: v1.34.1
	I1002 07:18:00.893076       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1002 07:18:02.015138       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1002 07:18:02.015252       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1002 07:18:02.015284       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1002 07:18:02.015315       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1002 07:18:02.015348       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1002 07:18:02.015382       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1002 07:18:02.015415       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1002 07:18:02.015448       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1002 07:18:02.015481       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1002 07:18:02.015512       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1002 07:18:02.015544       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1002 07:18:02.015575       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1002 07:18:02.033014       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1002 07:18:02.034577       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1002 07:18:02.035335       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1002 07:18:02.045748       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 07:18:02.056978       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1002 07:18:02.057010       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1002 07:18:02.057337       1 instance.go:239] Using reconciler: lease
	W1002 07:18:02.058416       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1002 07:18:22.032470       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1002 07:18:22.034569       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1002 07:18:22.058050       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
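
Note: this kube-apiserver instance (5b0c0535…) never came up. It could not open a client connection to etcd on 127.0.0.1:2379 (dials canceled, then TLS handshake failures) and gave up at 07:18:22 with "Error creating leases: error creating storage factory: context deadline exceeded". Output from exited control-plane containers like this one is only reachable through the container runtime, e.g. from inside the node:

	sudo crictl ps -a --name kube-apiserver
	sudo crictl logs 5b0c0535da78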
	
	
	==> kube-apiserver [8290015e8c15e01397448ee79ef46f66d0ddd62579c46b3fd334baf073a9d6bc] <==
	I1002 07:18:54.901508       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 07:18:54.914584       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 07:18:54.914862       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 07:18:54.917776       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:18:54.920456       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 07:18:54.921448       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 07:18:54.921690       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 07:18:54.935006       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 07:18:54.935120       1 policy_source.go:240] refreshing policies
	I1002 07:18:54.936177       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:18:54.995047       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 07:18:54.995073       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 07:18:55.006144       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1002 07:18:55.006401       1 aggregator.go:171] initial CRD sync complete...
	I1002 07:18:55.006443       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 07:18:55.006472       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 07:18:55.006502       1 cache.go:39] Caches are synced for autoregister controller
	I1002 07:18:55.693729       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:18:55.915859       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1002 07:18:56.852268       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 07:18:56.854341       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:18:56.866097       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:19:00.445840       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 07:19:00.449414       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 07:19:00.588914       1 controller.go:667] quota admission added evaluator for: deployments.apps
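
Note: the replacement kube-apiserver (8290015e…) came up cleanly between 07:18:54 and 07:19:00: informer caches synced, the endpoints for the kubernetes service were reset to 192.168.49.2, and quota admission evaluators were added for deployments, daemonsets and replicasets. Its current health can be confirmed through the API itself, assuming kubeconfig access to the cluster:

	kubectl get --raw='/readyz?verbose'
	kubectl get --raw='/livez?verbose'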
	
	
	==> kube-controller-manager [0dcd791f01f43325da7d666b2308b7e9e8afd6c81f0dce7b635d6b6e5e8a9df1] <==
	I1002 07:19:00.416685       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 07:19:00.422763       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:19:00.422858       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 07:19:00.422891       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 07:19:00.429174       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 07:19:00.430239       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 07:19:00.434548       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 07:19:00.434793       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 07:19:00.434939       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:19:00.434988       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 07:19:00.435000       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 07:19:00.435011       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 07:19:00.435027       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 07:19:00.436974       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 07:19:00.437153       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 07:19:00.437213       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 07:19:00.437246       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 07:19:00.437276       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 07:19:00.440308       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:19:00.441271       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 07:19:00.447203       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 07:19:00.447327       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 07:19:00.447774       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-550225-m04"
	I1002 07:19:50.432665       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-550225-m04"
	I1002 07:19:50.870389       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	
	
	==> kube-controller-manager [29394f92b6a368bb1845ecb24b6cebce9a3e6e6816e60bf240997292037f264a] <==
	I1002 07:18:16.059120       1 serving.go:386] Generated self-signed cert in-memory
	I1002 07:18:17.185952       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1002 07:18:17.185981       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:18:17.187402       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 07:18:17.187586       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 07:18:17.187839       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1002 07:18:17.187927       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 07:18:33.066017       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
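
Note: this kube-controller-manager instance (29394f92…) started while no apiserver was reachable, so building its controller context timed out against https://192.168.49.2:8443 and it exited; the other instance (0dcd791f…, above) is the one that later synced its informer caches and flagged the zone as PartialDisruption. Whether the current static pod is healthy can be checked with the usual kubeadm component label (an assumption about how minikube labels its static pods):

	kubectl -n kube-system get pods -l component=kube-controller-manager -o wide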
	
	
	==> kube-proxy [5051222f30f0ae589e47ad3f24adc858d48fe99da320fc5495aa8189ecc36596] <==
	I1002 07:19:45.951789       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:19:46.028809       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:19:46.129896       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:19:46.129933       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 07:19:46.130000       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:19:46.150308       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:19:46.150378       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:19:46.154018       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:19:46.154343       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:19:46.154416       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:19:46.157478       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:19:46.157553       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:19:46.157874       1 config.go:200] "Starting service config controller"
	I1002 07:19:46.157918       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:19:46.158250       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:19:46.158295       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:19:46.158742       1 config.go:309] "Starting node config controller"
	I1002 07:19:46.158794       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:19:46.158824       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:19:46.258046       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:19:46.258051       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 07:19:46.258406       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
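
Note: kube-proxy on ha-550225 restarted cleanly at 07:19:46 in iptables mode; the only complaint is the standing warning that nodePortAddresses is unset, which is a configuration note rather than an error. The effective proxy configuration lives in the kubeadm-managed kube-proxy ConfigMap and can be reviewed with:

	kubectl -n kube-system get configmap kube-proxy -o yaml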
	
	
	==> kube-scheduler [2b4cd729501f68e709fb29b74cdf4d89db019e465f669755a276bbd13dfa365d] <==
	E1002 07:17:57.915557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:17:59.343245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:18:17.475604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:18:19.476430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 07:18:20.523426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:18:20.961075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:18:21.209835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:18:22.175039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:18:23.065717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33332->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 07:18:23.065828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33338->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:18:23.065904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33346->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:18:23.066085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33356->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:18:23.066195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48896->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:18:23.066285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33302->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:18:23.066377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33316->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:18:23.066451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33400->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:18:23.067303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33366->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:18:23.067355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48888->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:18:23.067419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48872->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:18:23.067516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48892->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 07:18:23.067591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33382->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:18:50.334725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:18:54.767637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:18:54.767804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1002 07:18:55.890008       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:19:21 ha-550225 kubelet[753]: E1002 07:19:21.811346     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(f74a25ae-35bd-44b0-84a9-50a5df5dec1d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:21 ha-550225 kubelet[753]: E1002 07:19:21.811400     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="f74a25ae-35bd-44b0-84a9-50a5df5dec1d"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.810797     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-gph4b_default(193a390b-ce6f-4e39-afcc-7ee671deb0a1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.810843     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-gph4b" podUID="193a390b-ce6f-4e39-afcc-7ee671deb0a1"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.811359     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-s6dq8_kube-system(7626557b-e8fe-419b-b447-994cfa9b0f07): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.811895     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-s6dq8" podUID="7626557b-e8fe-419b-b447-994cfa9b0f07"
	Oct 02 07:19:23 ha-550225 kubelet[753]: E1002 07:19:23.811789     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-v7wnc_kube-system(b011ceef-f3c8-4142-8385-b09113581770): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:23 ha-550225 kubelet[753]: E1002 07:19:23.811826     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-v7wnc" podUID="b011ceef-f3c8-4142-8385-b09113581770"
	Oct 02 07:19:24 ha-550225 kubelet[753]: E1002 07:19:24.810191     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-7gnh8_kube-system(55461d93-6678-4e2e-8b48-7d26628c1cf9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:24 ha-550225 kubelet[753]: E1002 07:19:24.810240     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-7gnh8" podUID="55461d93-6678-4e2e-8b48-7d26628c1cf9"
	Oct 02 07:19:31 ha-550225 kubelet[753]: E1002 07:19:31.812684     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-skqs2_kube-system(d5f2a06e-009a-4c94-aee4-c6d515d1a38b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:31 ha-550225 kubelet[753]: E1002 07:19:31.812750     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-skqs2" podUID="d5f2a06e-009a-4c94-aee4-c6d515d1a38b"
	Oct 02 07:19:32 ha-550225 kubelet[753]: E1002 07:19:32.810908     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(f74a25ae-35bd-44b0-84a9-50a5df5dec1d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:32 ha-550225 kubelet[753]: E1002 07:19:32.811030     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="f74a25ae-35bd-44b0-84a9-50a5df5dec1d"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812380     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-s6dq8_kube-system(7626557b-e8fe-419b-b447-994cfa9b0f07): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812427     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-s6dq8" podUID="7626557b-e8fe-419b-b447-994cfa9b0f07"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812402     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-gph4b_default(193a390b-ce6f-4e39-afcc-7ee671deb0a1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812917     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-v7wnc_kube-system(b011ceef-f3c8-4142-8385-b09113581770): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.814141     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-v7wnc" podUID="b011ceef-f3c8-4142-8385-b09113581770"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.814168     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-gph4b" podUID="193a390b-ce6f-4e39-afcc-7ee671deb0a1"
	Oct 02 07:19:51 ha-550225 kubelet[753]: E1002 07:19:51.724599     753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d\": container with ID starting with 15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d not found: ID does not exist" containerID="15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d"
	Oct 02 07:19:51 ha-550225 kubelet[753]: I1002 07:19:51.724702     753 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d" err="rpc error: code = NotFound desc = could not find container \"15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d\": container with ID starting with 15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d not found: ID does not exist"
	Oct 02 07:19:51 ha-550225 kubelet[753]: E1002 07:19:51.725359     753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04\": container with ID starting with c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04 not found: ID does not exist" containerID="c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04"
	Oct 02 07:19:51 ha-550225 kubelet[753]: I1002 07:19:51.725398     753 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04" err="rpc error: code = NotFound desc = could not find container \"c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04\": container with ID starting with c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04 not found: ID does not exist"
	Oct 02 07:20:16 ha-550225 kubelet[753]: I1002 07:20:16.460466     753 scope.go:117] "RemoveContainer" containerID="48fccb25ba33b3850afc1ffdf5ca13f71673b1d992497dbcadf93bdbc8bdee4c"
	

                                                
                                                
-- /stdout --
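The kube-scheduler reflector errors and the kubelet CreateContainerConfigError entries in the logs above point at the same window: while the control plane was restarting, the API server on 192.168.49.2:8443 was refusing connections, so the kubelet had not yet listed Services and could not construct service environment variables for new containers. A hypothetical spot-check from the host (not something the harness runs) to confirm the endpoint and the Services list are reachable again:

	kubectl --context ha-550225 get --raw /healthz
	kubectl --context ha-550225 get svc -A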
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-550225 -n ha-550225
helpers_test.go:269: (dbg) Run:  kubectl --context ha-550225 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-2x8th
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-550225 describe pod busybox-7b57f96db7-2x8th
helpers_test.go:290: (dbg) kubectl --context ha-550225 describe pod busybox-7b57f96db7-2x8th:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-2x8th
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q7r8b (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-q7r8b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  0s    default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (4.79s)
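The FailedScheduling event in the describe output above matches the cluster state at this point in the run: three of the four nodes still carry the node.kubernetes.io/unreachable taint, and the one schedulable node is excluded by the busybox deployment's pod anti-affinity rule, so the new replica stays Pending. A hypothetical way to confirm the taints from the host (not part of the harness):

	kubectl --context ha-550225 get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints[*].key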

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (5.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:309: expected profile "ha-550225" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-550225\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-550225\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-550225\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"N
ame\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-
device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimization
s\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-550225
helpers_test.go:243: (dbg) docker inspect ha-550225:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	        "Created": "2025-10-02T07:02:30.539981852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 346684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:16:43.830280649Z",
	            "FinishedAt": "2025-10-02T07:16:42.559270036Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/hosts",
	        "LogPath": "/var/lib/docker/containers/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c-json.log",
	        "Name": "/ha-550225",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-550225:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-550225",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c",
	                "LowerDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdf030b6c2f20abb33a3234a6644ac5d3af52d540590a5cc0501ddab67511db5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-550225",
	                "Source": "/var/lib/docker/volumes/ha-550225/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-550225",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-550225",
	                "name.minikube.sigs.k8s.io": "ha-550225",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afa0a4e6ee5917c0a800a9abfad94a173555b01d2438c9506474ee7c27ad6564",
	            "SandboxKey": "/var/run/docker/netns/afa0a4e6ee59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-550225": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:f4:60:b8:9c:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "87a294cab4b5d50d5f227902c62678f378fbede9275f1d54f0b3de7a1f36e1a0",
	                    "EndpointID": "e0227cbf31cf607a461ab665f3bdb5d5d554f27df511a468e38aecbd366c38c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-550225",
	                        "1c1f8ec53310"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
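The Ports map under NetworkSettings above is where the forwarded SSH port comes from; the 22/tcp entry (127.0.0.1:33188) is the address the provisioner dials later in the start log below. The same lookup can be done directly, mirroring the command minikube itself runs in that log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-550225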
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-550225 -n ha-550225
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-550225 logs -n 25: (2.336362594s)
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test_ha-550225-m03_ha-550225-m04.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp testdata/cp-test.txt ha-550225-m04:/home/docker/cp-test.txt                                                             │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216719830/001/cp-test_ha-550225-m04.txt │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225:/home/docker/cp-test_ha-550225-m04_ha-550225.txt                       │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225.txt                                                 │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m02:/home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m02 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ cp      │ ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m03:/home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt               │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ ssh     │ ha-550225 ssh -n ha-550225-m03 sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ node    │ ha-550225 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:07 UTC │
	│ node    │ ha-550225 node start m02 --alsologtostderr -v 5                                                                                      │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:07 UTC │ 02 Oct 25 07:08 UTC │
	│ node    │ ha-550225 node list --alsologtostderr -v 5                                                                                           │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │                     │
	│ stop    │ ha-550225 stop --alsologtostderr -v 5                                                                                                │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │ 02 Oct 25 07:08 UTC │
	│ start   │ ha-550225 start --wait true --alsologtostderr -v 5                                                                                   │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:08 UTC │                     │
	│ node    │ ha-550225 node list --alsologtostderr -v 5                                                                                           │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	│ node    │ ha-550225 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	│ stop    │ ha-550225 stop --alsologtostderr -v 5                                                                                                │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │ 02 Oct 25 07:16 UTC │
	│ start   │ ha-550225 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:16 UTC │                     │
	│ node    │ ha-550225 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-550225 │ jenkins │ v1.37.0 │ 02 Oct 25 07:24 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:16:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:16:43.556654  346554 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:16:43.556900  346554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:43.556935  346554 out.go:374] Setting ErrFile to fd 2...
	I1002 07:16:43.556957  346554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:16:43.557253  346554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:16:43.557663  346554 out.go:368] Setting JSON to false
	I1002 07:16:43.558546  346554 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7155,"bootTime":1759382249,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:16:43.558645  346554 start.go:140] virtualization:  
	I1002 07:16:43.562097  346554 out.go:179] * [ha-550225] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:16:43.565995  346554 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:16:43.566065  346554 notify.go:220] Checking for updates...
	I1002 07:16:43.572511  346554 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:16:43.575317  346554 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:43.578176  346554 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:16:43.580964  346554 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:16:43.583787  346554 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:16:43.587186  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:43.587749  346554 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:16:43.619258  346554 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:16:43.619425  346554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:16:43.676323  346554 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:16:43.665454213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:16:43.676450  346554 docker.go:318] overlay module found
	I1002 07:16:43.679463  346554 out.go:179] * Using the docker driver based on existing profile
	I1002 07:16:43.682328  346554 start.go:304] selected driver: docker
	I1002 07:16:43.682357  346554 start.go:924] validating driver "docker" against &{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:16:43.682550  346554 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:16:43.682661  346554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:16:43.739766  346554 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:16:43.730208669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:16:43.740206  346554 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:16:43.740241  346554 cni.go:84] Creating CNI manager for ""
	I1002 07:16:43.740306  346554 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:16:43.740357  346554 start.go:348] cluster config:
	{Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:16:43.743601  346554 out.go:179] * Starting "ha-550225" primary control-plane node in "ha-550225" cluster
	I1002 07:16:43.746399  346554 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:16:43.749341  346554 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:16:43.752288  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:43.752352  346554 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:16:43.752374  346554 cache.go:58] Caching tarball of preloaded images
	I1002 07:16:43.752377  346554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:16:43.752484  346554 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:16:43.752495  346554 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:16:43.752642  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:43.772750  346554 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:16:43.772775  346554 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:16:43.772803  346554 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:16:43.772827  346554 start.go:360] acquireMachinesLock for ha-550225: {Name:mkc1f009b4f35f6b87d580d72d0a621c44a033f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:16:43.772899  346554 start.go:364] duration metric: took 46.236µs to acquireMachinesLock for "ha-550225"
	I1002 07:16:43.772922  346554 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:16:43.772934  346554 fix.go:54] fixHost starting: 
	I1002 07:16:43.773187  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:16:43.794446  346554 fix.go:112] recreateIfNeeded on ha-550225: state=Stopped err=<nil>
	W1002 07:16:43.794478  346554 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:16:43.797824  346554 out.go:252] * Restarting existing docker container for "ha-550225" ...
	I1002 07:16:43.797912  346554 cli_runner.go:164] Run: docker start ha-550225
	I1002 07:16:44.052064  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:16:44.071577  346554 kic.go:430] container "ha-550225" state is running.
	I1002 07:16:44.071977  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:44.097000  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:44.097247  346554 machine.go:93] provisionDockerMachine start ...
	I1002 07:16:44.097316  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:44.119603  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:44.120087  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:44.120103  346554 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:16:44.120661  346554 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57572->127.0.0.1:33188: read: connection reset by peer
	I1002 07:16:47.250760  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:16:47.250786  346554 ubuntu.go:182] provisioning hostname "ha-550225"
	I1002 07:16:47.250888  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:47.268212  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:47.268525  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:47.268543  346554 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225 && echo "ha-550225" | sudo tee /etc/hostname
	I1002 07:16:47.408749  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225
	
	I1002 07:16:47.408837  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:47.428229  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:47.428559  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:47.428582  346554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:16:47.563394  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:16:47.563422  346554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:16:47.563445  346554 ubuntu.go:190] setting up certificates
	I1002 07:16:47.563480  346554 provision.go:84] configureAuth start
	I1002 07:16:47.563555  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:47.583742  346554 provision.go:143] copyHostCerts
	I1002 07:16:47.583804  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:47.583843  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:16:47.583865  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:47.583942  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:16:47.584044  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:47.584067  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:16:47.584076  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:47.584105  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:16:47.584165  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:47.584188  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:16:47.584197  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:47.584232  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:16:47.584294  346554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225 san=[127.0.0.1 192.168.49.2 ha-550225 localhost minikube]
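
The provision step above issues the machine's server certificate from the shared minikube CA, embedding the listed SANs (loopback, the node IP 192.168.49.2, and the ha-550225/localhost/minikube hostnames). A minimal Go sketch of producing such a SAN-bearing certificate with crypto/x509 follows; it generates a throwaway CA for illustration, whereas the real flow signs with the existing ca.pem/ca-key.pem, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative CA; the provisioner reuses ca.pem / ca-key.pem instead of generating one.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-550225"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-550225", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
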
	I1002 07:16:49.085710  346554 provision.go:177] copyRemoteCerts
	I1002 07:16:49.085804  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:16:49.085919  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.102600  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.203033  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:16:49.203111  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:16:49.220709  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:16:49.220773  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 07:16:49.238283  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:16:49.238380  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:16:49.255763  346554 provision.go:87] duration metric: took 1.692265184s to configureAuth
	I1002 07:16:49.255832  346554 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:16:49.256105  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:49.256221  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.273296  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:49.273613  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I1002 07:16:49.273636  346554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:16:49.545258  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:16:49.545281  346554 machine.go:96] duration metric: took 5.448016594s to provisionDockerMachine
	I1002 07:16:49.545292  346554 start.go:293] postStartSetup for "ha-550225" (driver="docker")
	I1002 07:16:49.545335  346554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:16:49.545400  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:16:49.545448  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.562765  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.663440  346554 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:16:49.667012  346554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:16:49.667043  346554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:16:49.667055  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:16:49.667131  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:16:49.667227  346554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:16:49.667243  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:16:49.667356  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:16:49.675157  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:49.693566  346554 start.go:296] duration metric: took 148.259083ms for postStartSetup
	I1002 07:16:49.693674  346554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:16:49.693733  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.711628  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.808263  346554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:16:49.813222  346554 fix.go:56] duration metric: took 6.040285845s for fixHost
	I1002 07:16:49.813250  346554 start.go:83] releasing machines lock for "ha-550225", held for 6.040338171s
	I1002 07:16:49.813321  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:16:49.832086  346554 ssh_runner.go:195] Run: cat /version.json
	I1002 07:16:49.832138  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.832170  346554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:16:49.832223  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:16:49.860178  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.874339  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:16:49.958866  346554 ssh_runner.go:195] Run: systemctl --version
	I1002 07:16:50.049981  346554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:16:50.088401  346554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:16:50.093782  346554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:16:50.093888  346554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:16:50.102679  346554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:16:50.102707  346554 start.go:495] detecting cgroup driver to use...
	I1002 07:16:50.102739  346554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:16:50.102790  346554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:16:50.119025  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:16:50.132406  346554 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:16:50.132508  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:16:50.147702  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:16:50.161840  346554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:16:50.285662  346554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:16:50.412243  346554 docker.go:234] disabling docker service ...
	I1002 07:16:50.412358  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:16:50.429880  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:16:50.443435  346554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:16:50.570143  346554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:16:50.705200  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:16:50.718349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:16:50.732391  346554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:16:50.732489  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.741688  346554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:16:50.741842  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.751301  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.760089  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.769286  346554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:16:50.777484  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.786723  346554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.795606  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:16:50.804393  346554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:16:50.812287  346554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:16:50.819774  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:16:50.940841  346554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:16:51.084825  346554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:16:51.084933  346554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:16:51.088952  346554 start.go:563] Will wait 60s for crictl version
	I1002 07:16:51.089022  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:16:51.093255  346554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:16:51.121871  346554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
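
The version gate above polls the runtime until crictl answers (up to the stated 60s) before continuing. A rough local equivalent of that wait loop, assuming crictl and sudo are on PATH (the test harness actually runs the command over SSH inside the node):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForCrictl polls "crictl version" until the runtime answers or the deadline passes.
func waitForCrictl(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("crictl did not become ready: %v", err)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	out, err := waitForCrictl(60 * time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}
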
	I1002 07:16:51.122035  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:16:51.151306  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:16:51.186151  346554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:16:51.188993  346554 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:16:51.205719  346554 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:16:51.209600  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
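
That one-liner makes the host.minikube.internal mapping idempotent: any existing line for the name is filtered out and a single fresh entry is appended. The same idea in Go, written against an illustrative file rather than the node's /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry rewrites a hosts file so exactly one line maps name to ip,
// mirroring the grep -v / echo / cp sequence in the log line above.
func pinHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Illustrative file; the real target is /etc/hosts inside the node, written via sudo cp.
	path := "/tmp/hosts.example"
	_ = os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := pinHostsEntry(path, "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
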
	I1002 07:16:51.219722  346554 kubeadm.go:883] updating cluster {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:16:51.219870  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:51.219932  346554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:16:51.259348  346554 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:16:51.259373  346554 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:16:51.259435  346554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:16:51.285823  346554 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:16:51.285850  346554 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:16:51.285860  346554 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 07:16:51.285975  346554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:16:51.286067  346554 ssh_runner.go:195] Run: crio config
	I1002 07:16:51.349840  346554 cni.go:84] Creating CNI manager for ""
	I1002 07:16:51.349864  346554 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1002 07:16:51.349907  346554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:16:51.349941  346554 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-550225 NodeName:ha-550225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:16:51.350123  346554 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-550225"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:16:51.350149  346554 kube-vip.go:115] generating kube-vip config ...
	I1002 07:16:51.350220  346554 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:16:51.362455  346554 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
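
Because the ip_vs kernel modules are not loaded (lsmod reads /proc/modules), kube-vip is configured without control-plane load-balancing, and the manifest below relies on the ARP-announced VIP only. A small Go sketch of the same module probe:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipvsLoaded reports whether any ip_vs module is loaded, the same signal the
// `lsmod | grep ip_vs` probe above is looking for.
func ipvsLoaded() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := ipvsLoaded()
	fmt.Println(ok, err)
}
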
	I1002 07:16:51.362590  346554 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1002 07:16:51.362683  346554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:16:51.370716  346554 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:16:51.370824  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 07:16:51.378562  346554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 07:16:51.392384  346554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:16:51.405890  346554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1002 07:16:51.418852  346554 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:16:51.431748  346554 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:16:51.435456  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:16:51.445200  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:16:51.564279  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:16:51.580309  346554 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.2
	I1002 07:16:51.580335  346554 certs.go:195] generating shared ca certs ...
	I1002 07:16:51.580352  346554 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:51.580577  346554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:16:51.580643  346554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:16:51.580658  346554 certs.go:257] generating profile certs ...
	I1002 07:16:51.580760  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:16:51.580851  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.bf5122aa
	I1002 07:16:51.580915  346554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:16:51.580931  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:16:51.580960  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:16:51.580981  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:16:51.581001  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:16:51.581029  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:16:51.581060  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:16:51.581082  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:16:51.581099  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:16:51.581172  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:16:51.581223  346554 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:16:51.581238  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:16:51.581269  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:16:51.581323  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:16:51.581355  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:16:51.581425  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:51.581476  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.581497  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.581511  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.582046  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:16:51.608528  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:16:51.630032  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:16:51.651693  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:16:51.672816  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:16:51.694334  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:16:51.713045  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:16:51.734929  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:16:51.759074  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:16:51.783798  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:16:51.810129  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:16:51.829572  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:16:51.844038  346554 ssh_runner.go:195] Run: openssl version
	I1002 07:16:51.850521  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:16:51.859107  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.863052  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.863200  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:16:51.905139  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:16:51.915686  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:16:51.924646  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.928631  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.928697  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:16:51.970474  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:16:51.979037  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:16:51.988282  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.992329  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:16:51.992400  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:16:52.034608  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
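
The symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hashes printed by `openssl x509 -hash -noout`, which is how OpenSSL locates CA files under /etc/ssl/certs. A sketch that derives the link name the same way and creates it, using illustrative paths rather than the node's real directories:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert creates the <subject-hash>.0 symlink OpenSSL expects, computing
// the name with `openssl x509 -hash` exactly as in the log above.
func linkCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative paths; the run above links /usr/share/ca-certificates/*.pem into /etc/ssl/certs.
	_ = os.MkdirAll("/tmp/certs", 0755)
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
		fmt.Println(err)
	}
}
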
	I1002 07:16:52.043437  346554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:16:52.047807  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:16:52.090171  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:16:52.132189  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:16:52.173672  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:16:52.215246  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:16:52.259493  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
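
Each `-checkend 86400` probe above exits non-zero if the certificate expires within the next 24 hours, which would force the control-plane certificates to be regenerated on restart. The same validity check in pure Go with crypto/x509 (the path is one of the certs probed above; this is an illustrative equivalent, not minikube's own code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in the PEM file is still valid
// `within` from now, the same condition `openssl x509 -checkend` tests.
func validFor(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
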
	I1002 07:16:52.303359  346554 kubeadm.go:400] StartCluster: {Name:ha-550225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:16:52.303541  346554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:16:52.303637  346554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:16:52.411948  346554 cri.go:89] found id: ""
	I1002 07:16:52.412087  346554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:16:52.423926  346554 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:16:52.423985  346554 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:16:52.424072  346554 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:16:52.435971  346554 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:16:52.436519  346554 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-550225" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:52.436691  346554 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-292504/kubeconfig needs updating (will repair): [kubeconfig missing "ha-550225" cluster setting kubeconfig missing "ha-550225" context setting]
	I1002 07:16:52.436999  346554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:52.437624  346554 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
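
The repair decision above hinges on whether the kubeconfig already contains a cluster and a context named after the profile. A minimal version of that check with client-go's clientcmd loader (requires the k8s.io/client-go module; the path and profile name are taken from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Values from the log; adjust for another profile.
	path := "/home/jenkins/minikube-integration/21643-292504/kubeconfig"
	profile := "ha-550225"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	_, hasCluster := cfg.Clusters[profile]
	_, hasContext := cfg.Contexts[profile]
	if !hasCluster || !hasContext {
		fmt.Printf("kubeconfig needs updating: cluster=%v context=%v\n", hasCluster, hasContext)
	}
}
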
	I1002 07:16:52.438178  346554 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:16:52.438372  346554 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:16:52.438396  346554 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:16:52.438439  346554 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:16:52.438479  346554 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:16:52.438242  346554 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 07:16:52.438946  346554 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:16:52.453843  346554 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 07:16:52.453908  346554 kubeadm.go:601] duration metric: took 29.902711ms to restartPrimaryControlPlane
	I1002 07:16:52.454041  346554 kubeadm.go:402] duration metric: took 150.691034ms to StartCluster
	I1002 07:16:52.454081  346554 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:52.454172  346554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:16:52.454859  346554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:16:52.455192  346554 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:16:52.455245  346554 start.go:241] waiting for startup goroutines ...
	I1002 07:16:52.455279  346554 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:16:52.455778  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:52.480332  346554 out.go:179] * Enabled addons: 
	I1002 07:16:52.484238  346554 addons.go:514] duration metric: took 28.941955ms for enable addons: enabled=[]
	I1002 07:16:52.484336  346554 start.go:246] waiting for cluster config update ...
	I1002 07:16:52.484369  346554 start.go:255] writing updated cluster config ...
	I1002 07:16:52.488274  346554 out.go:203] 
	I1002 07:16:52.492458  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:52.492645  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:52.496127  346554 out.go:179] * Starting "ha-550225-m02" control-plane node in "ha-550225" cluster
	I1002 07:16:52.499195  346554 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:16:52.502435  346554 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:16:52.505497  346554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:16:52.505566  346554 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:16:52.505677  346554 cache.go:58] Caching tarball of preloaded images
	I1002 07:16:52.505807  346554 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:16:52.505838  346554 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:16:52.506003  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:52.530361  346554 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:16:52.530380  346554 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:16:52.530392  346554 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:16:52.530415  346554 start.go:360] acquireMachinesLock for ha-550225-m02: {Name:mk11ef625bc214163cbeacdb736ddec4214a8374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:16:52.530475  346554 start.go:364] duration metric: took 37.3µs to acquireMachinesLock for "ha-550225-m02"
	I1002 07:16:52.530499  346554 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:16:52.530506  346554 fix.go:54] fixHost starting: m02
	I1002 07:16:52.530790  346554 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:16:52.559198  346554 fix.go:112] recreateIfNeeded on ha-550225-m02: state=Stopped err=<nil>
	W1002 07:16:52.559226  346554 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:16:52.563143  346554 out.go:252] * Restarting existing docker container for "ha-550225-m02" ...
	I1002 07:16:52.563247  346554 cli_runner.go:164] Run: docker start ha-550225-m02
	I1002 07:16:52.985736  346554 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:16:53.019972  346554 kic.go:430] container "ha-550225-m02" state is running.
	I1002 07:16:53.020350  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:53.045172  346554 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/config.json ...
	I1002 07:16:53.045437  346554 machine.go:93] provisionDockerMachine start ...
	I1002 07:16:53.045501  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:53.087166  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:53.087519  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:53.087528  346554 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:16:53.088138  346554 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45188->127.0.0.1:33193: read: connection reset by peer
	I1002 07:16:56.311713  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:16:56.311782  346554 ubuntu.go:182] provisioning hostname "ha-550225-m02"
	I1002 07:16:56.311878  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:56.344609  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:56.344917  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:56.344929  346554 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-550225-m02 && echo "ha-550225-m02" | sudo tee /etc/hostname
	I1002 07:16:56.639669  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-550225-m02
	
	I1002 07:16:56.639788  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:56.668649  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:56.668967  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:56.668991  346554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-550225-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-550225-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-550225-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:16:56.892812  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:16:56.892848  346554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:16:56.892865  346554 ubuntu.go:190] setting up certificates
	I1002 07:16:56.892886  346554 provision.go:84] configureAuth start
	I1002 07:16:56.892966  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:56.931268  346554 provision.go:143] copyHostCerts
	I1002 07:16:56.931313  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:56.931346  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:16:56.931357  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:16:56.931436  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:16:56.931520  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:56.931541  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:16:56.931548  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:16:56.931576  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:16:56.931619  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:56.931640  346554 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:16:56.931645  346554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:16:56.931673  346554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:16:56.931727  346554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.ha-550225-m02 san=[127.0.0.1 192.168.49.3 ha-550225-m02 localhost minikube]
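The provision.go step above signs a per-node server certificate whose SANs cover every name and address used to reach m02 (127.0.0.1, 192.168.49.3, ha-550225-m02, localhost, minikube). A minimal, self-contained Go sketch of the same idea follows; it self-signs for brevity, whereas the real step signs with the shared minikube CA in ca.pem/ca-key.pem, and all identifiers in it are illustrative rather than minikube's own.

// Minimal sketch (not minikube's provision.go): create a server certificate
// carrying the same SANs the log reports for ha-550225-m02. For brevity this
// self-signs; the real step signs with the shared minikube CA key instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-550225-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go line above.
		DNSNames:    []string{"ha-550225-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}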
	I1002 07:16:57.380087  346554 provision.go:177] copyRemoteCerts
	I1002 07:16:57.380161  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:16:57.380209  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:57.399377  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:57.503607  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 07:16:57.503674  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:16:57.534864  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 07:16:57.534935  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 07:16:57.579624  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 07:16:57.579686  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:16:57.613798  346554 provision.go:87] duration metric: took 720.891298ms to configureAuth
	I1002 07:16:57.613866  346554 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:16:57.614125  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:16:57.614268  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:57.655334  346554 main.go:141] libmachine: Using SSH client type: native
	I1002 07:16:57.655649  346554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I1002 07:16:57.655669  346554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:16:58.296218  346554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:16:58.296241  346554 machine.go:96] duration metric: took 5.250794733s to provisionDockerMachine
	I1002 07:16:58.296266  346554 start.go:293] postStartSetup for "ha-550225-m02" (driver="docker")
	I1002 07:16:58.296279  346554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:16:58.296361  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:16:58.296407  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.334246  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.454625  346554 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:16:58.462912  346554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:16:58.462946  346554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:16:58.462957  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:16:58.463024  346554 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:16:58.463132  346554 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:16:58.463146  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /etc/ssl/certs/2943572.pem
	I1002 07:16:58.463245  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:16:58.476350  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:16:58.502934  346554 start.go:296] duration metric: took 206.651168ms for postStartSetup
	I1002 07:16:58.503074  346554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:16:58.503140  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.541010  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.704044  346554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:16:58.724725  346554 fix.go:56] duration metric: took 6.194210695s for fixHost
	I1002 07:16:58.724751  346554 start.go:83] releasing machines lock for "ha-550225-m02", held for 6.194264053s
	I1002 07:16:58.724830  346554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m02
	I1002 07:16:58.757236  346554 out.go:179] * Found network options:
	I1002 07:16:58.760259  346554 out.go:179]   - NO_PROXY=192.168.49.2
	W1002 07:16:58.763701  346554 proxy.go:120] fail to check proxy env: Error ip not in block
	W1002 07:16:58.763752  346554 proxy.go:120] fail to check proxy env: Error ip not in block
	I1002 07:16:58.763820  346554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:16:58.763852  346554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:16:58.763870  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.763907  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m02
	I1002 07:16:58.799805  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:58.800051  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m02/id_rsa Username:docker}
	I1002 07:16:59.297366  346554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:16:59.320265  346554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:16:59.320354  346554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:16:59.335012  346554 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:16:59.335039  346554 start.go:495] detecting cgroup driver to use...
	I1002 07:16:59.335070  346554 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:16:59.335161  346554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:16:59.357972  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:16:59.378445  346554 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:16:59.378521  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:16:59.402692  346554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:16:59.423049  346554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:16:59.777657  346554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:17:00.088553  346554 docker.go:234] disabling docker service ...
	I1002 07:17:00.088656  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:17:00.130593  346554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:17:00.210008  346554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:17:00.633988  346554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:17:01.021589  346554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:17:01.054167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:17:01.092894  346554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:17:01.092980  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.111830  346554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:17:01.111928  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.139965  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.151897  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.168595  346554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:17:01.186410  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.204646  346554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.221763  346554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:17:01.236700  346554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:17:01.257944  346554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:17:01.272835  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:17:01.618372  346554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:18:32.051852  346554 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.433435555s)
	I1002 07:18:32.051878  346554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:18:32.051938  346554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:18:32.056156  346554 start.go:563] Will wait 60s for crictl version
	I1002 07:18:32.056222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:18:32.060117  346554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:18:32.088770  346554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:18:32.088860  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:18:32.119432  346554 ssh_runner.go:195] Run: crio --version
	I1002 07:18:32.154051  346554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:18:32.156909  346554 out.go:179]   - env NO_PROXY=192.168.49.2
	I1002 07:18:32.159957  346554 cli_runner.go:164] Run: docker network inspect ha-550225 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:18:32.177164  346554 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:18:32.181230  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:18:32.191471  346554 mustload.go:65] Loading cluster: ha-550225
	I1002 07:18:32.191729  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:18:32.191999  346554 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:18:32.209130  346554 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:18:32.209416  346554 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225 for IP: 192.168.49.3
	I1002 07:18:32.209433  346554 certs.go:195] generating shared ca certs ...
	I1002 07:18:32.209448  346554 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:18:32.209574  346554 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:18:32.209622  346554 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:18:32.209635  346554 certs.go:257] generating profile certs ...
	I1002 07:18:32.209712  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key
	I1002 07:18:32.209761  346554 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key.e172f685
	I1002 07:18:32.209802  346554 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key
	I1002 07:18:32.209816  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 07:18:32.209829  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 07:18:32.209843  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 07:18:32.209855  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 07:18:32.209869  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 07:18:32.209883  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 07:18:32.209898  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 07:18:32.209908  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 07:18:32.209964  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:18:32.209998  346554 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:18:32.210010  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:18:32.210033  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:18:32.210061  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:18:32.210089  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:18:32.210137  346554 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:18:32.210168  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.210187  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem -> /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.210198  346554 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.210261  346554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:18:32.227689  346554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:18:32.315413  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1002 07:18:32.319445  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1002 07:18:32.328111  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1002 07:18:32.331777  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1002 07:18:32.340081  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1002 07:18:32.343746  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1002 07:18:32.351558  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1002 07:18:32.354911  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1002 07:18:32.362878  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1002 07:18:32.366632  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1002 07:18:32.374581  346554 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1002 07:18:32.378281  346554 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1002 07:18:32.386552  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:18:32.405394  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:18:32.422759  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:18:32.440360  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:18:32.457759  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 07:18:32.475843  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:18:32.493288  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:18:32.510289  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:18:32.527991  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:18:32.545549  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:18:32.562952  346554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:18:32.580383  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1002 07:18:32.593477  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1002 07:18:32.606933  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1002 07:18:32.619772  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1002 07:18:32.634020  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1002 07:18:32.646873  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1002 07:18:32.659836  346554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1002 07:18:32.673417  346554 ssh_runner.go:195] Run: openssl version
	I1002 07:18:32.679719  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:18:32.688081  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.692003  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.692135  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:18:32.733286  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:18:32.741334  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:18:32.749624  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.753431  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.753505  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:18:32.794364  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:18:32.802247  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:18:32.810290  346554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.813847  346554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.813927  346554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:18:32.854739  346554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:18:32.862471  346554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:18:32.866281  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:18:32.907787  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:18:32.948617  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:18:32.989448  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:18:33.030881  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:18:33.074016  346554 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
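The run of `openssl x509 -noout -checkend 86400` calls above asks whether each control-plane certificate will expire within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The same freshness test can be done natively, as in this sketch; the file path is copied from the log and expiresWithin is an illustrative helper, not a minikube function.

// Sketch of the "-checkend 86400" test done natively: parse the certificate
// and report whether it expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + d" is past NotAfter, i.e. the cert expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}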
	I1002 07:18:33.117026  346554 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1002 07:18:33.117170  346554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-550225-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-550225 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:18:33.117220  346554 kube-vip.go:115] generating kube-vip config ...
	I1002 07:18:33.117288  346554 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 07:18:33.133837  346554 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:18:33.133931  346554 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
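The manifest printed above is a standard static pod: once copied to /etc/kubernetes/manifests/kube-vip.yaml (the scp a few lines below), kubelet runs kube-vip directly so the 192.168.49.254 control-plane VIP stays reachable on this node. A rough sketch of rendering such a manifest from a template is shown here; the shortened template, the field names, and RenderKubeVIP are illustrative stand-ins, not minikube's kube-vip.go.

// Hypothetical sketch: render a kube-vip static-pod manifest with the VIP
// address and interface as inputs, roughly what the kube-vip config step does.
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

const vipTemplate = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v1.0.0
    args: ["manager"]
    env:
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .Address }}
  hostNetwork: true
`

type vipParams struct {
	Interface string
	Address   string
}

// RenderKubeVIP fills the template; the result would be written to
// /etc/kubernetes/manifests/kube-vip.yaml so kubelet runs it as a static pod.
func RenderKubeVIP(p vipParams) (string, error) {
	tmpl, err := template.New("kube-vip").Parse(vipTemplate)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, p); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := RenderKubeVIP(vipParams{Interface: "eth0", Address: "192.168.49.254"})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}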
	I1002 07:18:33.134029  346554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:18:33.142503  346554 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:18:33.142627  346554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1002 07:18:33.150436  346554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 07:18:33.163196  346554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:18:33.176800  346554 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1002 07:18:33.191119  346554 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 07:18:33.195012  346554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
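The bash one-liner above rewrites /etc/hosts by filtering out any stale control-plane.minikube.internal entry, appending the fresh VIP mapping, and copying the staged file into place (a plain sudo redirect into /etc/hosts would not work over SSH). A plain-Go version of the same edit, with updateHosts as an illustrative name, might look like this:

// Sketch of the /etc/hosts update performed by the bash one-liner above: drop
// any stale "control-plane.minikube.internal" line, then append the current
// VIP. The real step stages to /tmp and `sudo cp`s the result into place.
package main

import (
	"os"
	"strings"
)

func updateHosts(path, hostname, ip string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry, drop it
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := updateHosts("/etc/hosts", "control-plane.minikube.internal", "192.168.49.254"); err != nil {
		panic(err)
	}
}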
	I1002 07:18:33.205076  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:18:33.339361  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:18:33.353170  346554 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:18:33.353495  346554 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:18:33.359500  346554 out.go:179] * Verifying Kubernetes components...
	I1002 07:18:33.362288  346554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:18:33.491257  346554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:18:33.505467  346554 kapi.go:59] client config for ha-550225: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/ha-550225/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	W1002 07:18:33.505560  346554 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1002 07:18:33.505989  346554 node_ready.go:35] waiting up to 6m0s for node "ha-550225-m02" to be "Ready" ...
	W1002 07:18:35.506749  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:38.010468  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:40.016084  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:42.506872  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:44.507212  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:47.007659  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:49.506544  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 07:18:51.506605  346554 node_ready.go:55] error getting node "ha-550225-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-550225-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 07:18:54.785251  346554 node_ready.go:49] node "ha-550225-m02" is "Ready"
	I1002 07:18:54.785285  346554 node_ready.go:38] duration metric: took 21.279267345s for node "ha-550225-m02" to be "Ready" ...
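Reaching Ready took about 21s here because each probe of https://192.168.49.2:8443 was retried on "connection refused" until the restarted API server came back up. Stripped of the Kubernetes specifics, node_ready.go's behavior is a bounded poll loop, sketched below with made-up intervals; waitFor is not a minikube helper.

// Generic sketch of the poll-until-Ready pattern used above: retry a check on
// a fixed interval, tolerating transient errors, until success or timeout.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitFor(check func() error, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: last error: %w", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	calls := 0
	err := waitFor(func() error {
		calls++
		if calls < 5 {
			return errors.New("connect: connection refused") // transient, keep retrying
		}
		return nil
	}, 2*time.Second, 6*time.Minute)
	fmt.Println("result:", err)
}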
	I1002 07:18:54.785300  346554 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:18:54.785382  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:55.286257  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:55.786278  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:56.285480  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:56.785495  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:57.286432  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:57.786472  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:58.285596  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:58.786260  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:59.286148  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:18:59.785674  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:00.286401  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:00.786468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:01.286310  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:01.786133  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:02.285476  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:02.785523  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:03.285578  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:03.785477  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:04.285835  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:04.786152  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:05.285495  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:05.785558  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:06.285602  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:06.785496  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:07.286468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:07.786358  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:08.286294  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:08.786349  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:09.286208  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:09.786292  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:10.285577  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:10.785589  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:11.286341  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:11.785523  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:12.286415  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:12.786007  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:13.286205  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:13.786328  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:14.285849  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:14.786397  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:15.285488  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:15.785431  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:16.285445  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:16.785468  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:17.285527  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:17.785637  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:18.285535  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:18.786137  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:19.286152  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:19.786052  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:20.285507  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:20.785522  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:21.285716  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:21.786849  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:22.286372  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:22.786418  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:23.286092  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:23.786120  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:24.285506  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:24.785439  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:25.286469  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:25.785780  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:26.285507  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:26.785611  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:27.286260  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:27.785499  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:28.285509  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:28.785521  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:29.285762  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:29.786049  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:30.286329  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:30.785543  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:31.285473  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:31.786013  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:32.285818  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:32.785931  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:33.285557  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:33.786122  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:33.786216  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:33.819648  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:33.819668  346554 cri.go:89] found id: ""
	I1002 07:19:33.819678  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:33.819746  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.823889  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:33.823960  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:33.855251  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:33.855272  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:33.855277  346554 cri.go:89] found id: ""
	I1002 07:19:33.855285  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:33.855351  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.858992  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.862888  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:33.862975  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:33.894144  346554 cri.go:89] found id: ""
	I1002 07:19:33.894169  346554 logs.go:282] 0 containers: []
	W1002 07:19:33.894178  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:33.894184  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:33.894243  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:33.921104  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:33.921125  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:33.921130  346554 cri.go:89] found id: ""
	I1002 07:19:33.921137  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:33.921194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.925016  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.928536  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:33.928631  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:33.961082  346554 cri.go:89] found id: ""
	I1002 07:19:33.961111  346554 logs.go:282] 0 containers: []
	W1002 07:19:33.961121  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:33.961127  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:33.961187  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:33.993876  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:33.993901  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:33.993906  346554 cri.go:89] found id: ""
	I1002 07:19:33.993916  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:33.993979  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:33.999741  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:34.004783  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:34.004869  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:34.034228  346554 cri.go:89] found id: ""
	I1002 07:19:34.034256  346554 logs.go:282] 0 containers: []
	W1002 07:19:34.034265  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:34.034275  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:34.034288  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:34.096737  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:34.096779  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:34.132301  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:34.132339  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:34.182701  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:34.182737  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:34.217015  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:34.217044  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:34.232712  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:34.232741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:34.652633  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:34.643757    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.644504    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.646352    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647072    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647911    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:34.643757    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.644504    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.646352    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647072    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:34.647911    1434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:34.652655  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:34.652669  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:34.681086  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:34.681118  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:34.708033  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:34.708062  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:34.793299  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:34.793407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:34.848620  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:34.848649  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:34.948533  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:34.948572  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:37.477483  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:37.488961  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:37.489035  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:37.518325  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:37.518349  346554 cri.go:89] found id: ""
	I1002 07:19:37.518358  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:37.518419  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.522140  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:37.522269  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:37.549073  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:37.549093  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:37.549098  346554 cri.go:89] found id: ""
	I1002 07:19:37.549105  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:37.549190  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.552869  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.556417  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:37.556497  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:37.589096  346554 cri.go:89] found id: ""
	I1002 07:19:37.589122  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.589130  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:37.589137  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:37.589199  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:37.615330  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:37.615354  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:37.615360  346554 cri.go:89] found id: ""
	I1002 07:19:37.615367  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:37.615424  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.619166  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.622673  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:37.622742  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:37.648426  346554 cri.go:89] found id: ""
	I1002 07:19:37.648458  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.648467  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:37.648474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:37.648536  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:37.676515  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:37.676536  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:37.676541  346554 cri.go:89] found id: ""
	I1002 07:19:37.676549  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:37.676605  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.680280  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:37.684478  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:37.684552  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:37.710689  346554 cri.go:89] found id: ""
	I1002 07:19:37.710713  346554 logs.go:282] 0 containers: []
	W1002 07:19:37.710722  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:37.710731  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:37.710741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:37.807134  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:37.807171  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:37.877814  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:37.869236    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.869721    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871280    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871668    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.873245    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:37.869236    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.869721    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871280    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.871668    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:37.873245    1549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:37.877839  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:37.877853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:37.920820  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:37.920854  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:37.956765  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:37.956802  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:37.985482  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:37.985510  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:38.017517  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:38.017548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:38.100846  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:38.100884  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:38.136290  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:38.136318  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:38.151732  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:38.151763  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:38.177792  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:38.177822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:38.229226  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:38.229260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:40.756410  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:40.767378  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:40.767448  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:40.799187  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:40.799205  346554 cri.go:89] found id: ""
	I1002 07:19:40.799213  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:40.799268  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.804369  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:40.804454  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:40.830559  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:40.830628  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:40.830652  346554 cri.go:89] found id: ""
	I1002 07:19:40.830679  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:40.830771  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.835205  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.839714  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:40.839827  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:40.867014  346554 cri.go:89] found id: ""
	I1002 07:19:40.867039  346554 logs.go:282] 0 containers: []
	W1002 07:19:40.867048  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:40.867054  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:40.867141  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:40.905810  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:40.905829  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:40.905835  346554 cri.go:89] found id: ""
	I1002 07:19:40.905842  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:40.905898  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.909648  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.913397  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:40.913471  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:40.940488  346554 cri.go:89] found id: ""
	I1002 07:19:40.940511  346554 logs.go:282] 0 containers: []
	W1002 07:19:40.940520  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:40.940526  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:40.940585  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:40.968408  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:40.968429  346554 cri.go:89] found id: "279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:40.968439  346554 cri.go:89] found id: ""
	I1002 07:19:40.968447  346554 logs.go:282] 2 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521]
	I1002 07:19:40.968503  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.972336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:40.976070  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:40.976163  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:41.010288  346554 cri.go:89] found id: ""
	I1002 07:19:41.010318  346554 logs.go:282] 0 containers: []
	W1002 07:19:41.010328  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:41.010338  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:41.010353  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:41.058706  346554 logs.go:123] Gathering logs for kube-controller-manager [279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521] ...
	I1002 07:19:41.058741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 279cadba63b424ce78cba84fce66f98c6f404c3addace2fc31fddbb2d5872521"
	I1002 07:19:41.085223  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:41.085252  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:41.117537  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:41.117564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:41.218224  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:41.218265  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:41.234686  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:41.234727  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:41.270240  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:41.270276  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:41.321885  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:41.321922  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:41.350649  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:41.350684  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:41.382710  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:41.382740  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:41.465872  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:41.465911  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:41.547196  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:41.537685    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539123    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539741    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.541682    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.542291    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:41.537685    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539123    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.539741    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.541682    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:41.542291    1758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:41.547220  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:41.547234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.074126  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:44.087746  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:44.087861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:44.116198  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.116223  346554 cri.go:89] found id: ""
	I1002 07:19:44.116232  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:44.116290  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.120227  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:44.120325  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:44.146916  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:44.146943  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:44.146948  346554 cri.go:89] found id: ""
	I1002 07:19:44.146955  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:44.147009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.151266  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.155925  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:44.156012  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:44.190430  346554 cri.go:89] found id: ""
	I1002 07:19:44.190458  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.190467  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:44.190473  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:44.190529  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:44.219366  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:44.219387  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:44.219392  346554 cri.go:89] found id: ""
	I1002 07:19:44.219400  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:44.219455  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.223324  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.226924  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:44.227000  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:44.252543  346554 cri.go:89] found id: ""
	I1002 07:19:44.252566  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.252576  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:44.252583  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:44.252650  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:44.280466  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:44.280489  346554 cri.go:89] found id: ""
	I1002 07:19:44.280498  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:44.280559  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:44.284050  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:44.284122  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:44.314223  346554 cri.go:89] found id: ""
	I1002 07:19:44.314250  346554 logs.go:282] 0 containers: []
	W1002 07:19:44.314259  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:44.314269  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:44.314304  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:44.340933  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:44.340965  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:44.377320  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:44.377352  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:44.411349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:44.411377  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:44.516647  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:44.516695  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:44.585736  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:44.578237    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.578651    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580147    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580498    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.581966    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:44.578237    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.578651    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580147    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.580498    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:44.581966    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:44.585771  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:44.585785  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:44.629867  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:44.629909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:44.681709  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:44.681750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:44.710536  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:44.710566  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:44.801698  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:44.801744  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:44.834146  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:44.834175  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:47.351602  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:47.362458  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:47.362546  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:47.391769  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:47.391792  346554 cri.go:89] found id: ""
	I1002 07:19:47.391802  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:47.391863  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.395882  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:47.395971  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:47.428129  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:47.428151  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:47.428156  346554 cri.go:89] found id: ""
	I1002 07:19:47.428164  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:47.428225  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.432313  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.436344  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:47.436415  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:47.464208  346554 cri.go:89] found id: ""
	I1002 07:19:47.464230  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.464238  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:47.464244  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:47.464302  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:47.494674  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:47.494731  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:47.494773  346554 cri.go:89] found id: ""
	I1002 07:19:47.494800  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:47.494885  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.499610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.503658  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:47.503779  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:47.532490  346554 cri.go:89] found id: ""
	I1002 07:19:47.532517  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.532527  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:47.532534  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:47.532599  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:47.565084  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:47.565122  346554 cri.go:89] found id: ""
	I1002 07:19:47.565131  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:47.565231  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:47.569404  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:47.569483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:47.597243  346554 cri.go:89] found id: ""
	I1002 07:19:47.597266  346554 logs.go:282] 0 containers: []
	W1002 07:19:47.597275  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:47.597284  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:47.597294  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:47.693710  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:47.693748  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:47.771715  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:47.763458    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.764216    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.765967    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.766445    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.768080    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:47.763458    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.764216    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.765967    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.766445    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:47.768080    1980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:47.771739  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:47.771752  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:47.810005  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:47.810090  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:47.890792  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:47.890824  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:47.977230  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:47.977271  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:48.018612  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:48.018643  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:48.105364  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:48.105401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:48.124841  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:48.124870  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:48.193027  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:48.193069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:48.239251  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:48.239279  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:50.782662  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:50.794011  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:50.794105  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:50.838191  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:50.838216  346554 cri.go:89] found id: ""
	I1002 07:19:50.838225  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:50.838286  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.842655  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:50.842755  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:50.891807  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:50.891833  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:50.891839  346554 cri.go:89] found id: ""
	I1002 07:19:50.891847  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:50.891964  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.899196  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.904048  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:50.904143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:50.939603  346554 cri.go:89] found id: ""
	I1002 07:19:50.939626  346554 logs.go:282] 0 containers: []
	W1002 07:19:50.939635  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:50.939641  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:50.939735  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:50.971030  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:50.971053  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:50.971059  346554 cri.go:89] found id: ""
	I1002 07:19:50.971067  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:50.971179  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.975612  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:50.980140  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:50.980242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:51.025029  346554 cri.go:89] found id: ""
	I1002 07:19:51.025055  346554 logs.go:282] 0 containers: []
	W1002 07:19:51.025064  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:51.025071  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:51.025186  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:51.058743  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:51.058764  346554 cri.go:89] found id: ""
	I1002 07:19:51.058772  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:51.058862  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:51.064931  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:51.065035  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:51.101431  346554 cri.go:89] found id: ""
	I1002 07:19:51.101462  346554 logs.go:282] 0 containers: []
	W1002 07:19:51.101486  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:51.101498  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:51.101531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:51.126461  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:51.126494  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:51.217174  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:51.208157    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.208931    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.210624    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.211554    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.212602    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:51.208157    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.208931    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.210624    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.211554    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:51.212602    2120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:51.217200  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:51.217216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:51.279369  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:51.279449  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:51.337216  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:51.337253  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:51.425630  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:51.425669  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:51.528560  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:51.528601  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:51.556690  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:51.556719  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:51.600118  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:51.600251  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:51.632616  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:51.632650  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:51.662904  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:51.662935  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:54.196274  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:54.207476  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:54.207546  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:54.238643  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:54.238664  346554 cri.go:89] found id: ""
	I1002 07:19:54.238673  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:54.238729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.242382  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:54.242456  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:54.274345  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:54.274377  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:54.274383  346554 cri.go:89] found id: ""
	I1002 07:19:54.274390  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:54.274451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.278686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.283146  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:54.283225  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:54.315609  346554 cri.go:89] found id: ""
	I1002 07:19:54.315635  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.315645  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:54.315652  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:54.315718  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:54.343684  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:54.343709  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:54.343715  346554 cri.go:89] found id: ""
	I1002 07:19:54.343723  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:54.343789  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.347649  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.351327  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:54.351428  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:54.380301  346554 cri.go:89] found id: ""
	I1002 07:19:54.380336  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.380346  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:54.380353  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:54.380440  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:54.413081  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:54.413105  346554 cri.go:89] found id: ""
	I1002 07:19:54.413114  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:54.413172  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:54.417107  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:54.417181  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:54.450903  346554 cri.go:89] found id: ""
	I1002 07:19:54.450930  346554 logs.go:282] 0 containers: []
	W1002 07:19:54.450947  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:54.450957  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:54.450972  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:54.551509  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:54.551550  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:19:54.567991  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:54.568018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:54.641344  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:54.632782    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.633432    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635278    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635893    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.637542    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:54.632782    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.633432    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635278    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.635893    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:54.637542    2262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:54.641366  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:54.641403  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:54.677557  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:54.677592  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:54.742382  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:54.742417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:54.830648  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:54.830681  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:54.866699  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:54.866727  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:54.893138  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:54.893166  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:54.942885  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:54.942920  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:54.977070  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:54.977098  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
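	The cycle above locates each control-plane component by shelling out to crictl with a name filter and then tails the logs of every container ID it finds; an empty result is what produces the "0 containers" / "No container was found matching" warnings. Below is a minimal Go sketch, not minikube's own code, that reproduces the same lookup under the assumption that crictl is on the PATH and sudo is available:

	// Minimal sketch (illustration only): list container IDs for a component
	// the same way the log cycle above does, by running
	// `sudo crictl ps -a --quiet --name=<component>` and splitting the output.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the IDs printed by crictl, one per line.
	// An empty slice corresponds to the "0 containers" warnings in the log.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
		}
	}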
	I1002 07:19:57.528866  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:19:57.540731  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:19:57.540803  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:19:57.571921  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:57.571945  346554 cri.go:89] found id: ""
	I1002 07:19:57.571954  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:19:57.572028  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.575942  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:19:57.576018  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:19:57.604185  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:57.604219  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:57.604224  346554 cri.go:89] found id: ""
	I1002 07:19:57.604232  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:19:57.604326  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.608202  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.611833  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:19:57.611912  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:19:57.640401  346554 cri.go:89] found id: ""
	I1002 07:19:57.640431  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.640440  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:19:57.640447  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:19:57.640519  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:19:57.671538  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:57.671560  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:57.671565  346554 cri.go:89] found id: ""
	I1002 07:19:57.671572  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:19:57.671629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.675430  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.679760  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:19:57.679837  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:19:57.707483  346554 cri.go:89] found id: ""
	I1002 07:19:57.707511  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.707521  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:19:57.707527  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:19:57.707592  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:19:57.736308  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:57.736330  346554 cri.go:89] found id: ""
	I1002 07:19:57.736338  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:19:57.736407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:19:57.740334  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:19:57.740505  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:19:57.771488  346554 cri.go:89] found id: ""
	I1002 07:19:57.771558  346554 logs.go:282] 0 containers: []
	W1002 07:19:57.771575  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:19:57.771585  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:19:57.771599  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:19:57.824974  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:19:57.825013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:19:57.862787  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:19:57.862825  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:19:57.891348  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:19:57.891374  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:19:57.923682  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:19:57.923711  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:19:57.996115  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:19:57.987953    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.988650    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990229    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990623    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.992277    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:19:57.987953    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.988650    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990229    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.990623    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:19:57.992277    2424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:19:57.996139  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:19:57.996155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:19:58.033126  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:19:58.033198  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:19:58.106377  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:19:58.106415  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:19:58.139224  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:19:58.139252  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:19:58.226478  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:19:58.226525  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:19:58.331297  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:19:58.331338  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:00.847448  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:00.859829  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:00.859905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:00.887965  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:00.888039  346554 cri.go:89] found id: ""
	I1002 07:20:00.888063  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:00.888133  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.892548  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:00.892623  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:00.922567  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:00.922586  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:00.922591  346554 cri.go:89] found id: ""
	I1002 07:20:00.922598  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:00.922653  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.926435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.930250  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:00.930339  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:00.959728  346554 cri.go:89] found id: ""
	I1002 07:20:00.959759  346554 logs.go:282] 0 containers: []
	W1002 07:20:00.959769  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:00.959777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:00.959861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:00.988254  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:00.988317  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:00.988338  346554 cri.go:89] found id: ""
	I1002 07:20:00.988365  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:00.988466  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.993016  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:00.996699  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:00.996818  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:01.024791  346554 cri.go:89] found id: ""
	I1002 07:20:01.024815  346554 logs.go:282] 0 containers: []
	W1002 07:20:01.024823  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:01.024849  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:01.024931  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:01.056703  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:01.056728  346554 cri.go:89] found id: ""
	I1002 07:20:01.056737  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:01.056820  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:01.061200  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:01.061302  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:01.092652  346554 cri.go:89] found id: ""
	I1002 07:20:01.092680  346554 logs.go:282] 0 containers: []
	W1002 07:20:01.092690  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:01.092701  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:01.092715  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:01.121048  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:01.121084  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:01.227967  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:01.228007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:01.246697  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:01.246728  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:01.299528  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:01.299606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:01.329789  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:01.329875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:01.412310  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:01.412348  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:01.449621  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:01.449651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:01.528807  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:01.519940    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.520990    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.521913    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523485    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523993    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:01.519940    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.520990    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.521913    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523485    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:01.523993    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:01.528832  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:01.528848  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:01.557543  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:01.557575  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:01.606902  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:01.607007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:04.163648  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:04.175704  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:04.175798  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:04.202895  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:04.202920  346554 cri.go:89] found id: ""
	I1002 07:20:04.202929  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:04.202988  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.206773  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:04.206847  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:04.237461  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:04.237484  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:04.237490  346554 cri.go:89] found id: ""
	I1002 07:20:04.237497  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:04.237551  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.241192  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.244646  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:04.244721  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:04.271145  346554 cri.go:89] found id: ""
	I1002 07:20:04.271172  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.271181  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:04.271188  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:04.271290  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:04.301758  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:04.301787  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:04.301792  346554 cri.go:89] found id: ""
	I1002 07:20:04.301800  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:04.301858  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.305658  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.309360  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:04.309437  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:04.339291  346554 cri.go:89] found id: ""
	I1002 07:20:04.339317  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.339339  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:04.339347  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:04.339417  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:04.366771  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:04.366841  346554 cri.go:89] found id: ""
	I1002 07:20:04.366866  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:04.366961  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:04.371032  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:04.371213  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:04.396810  346554 cri.go:89] found id: ""
	I1002 07:20:04.396889  346554 logs.go:282] 0 containers: []
	W1002 07:20:04.396905  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:04.396916  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:04.396933  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:04.414258  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:04.414291  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:04.478315  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:04.478395  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:04.536808  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:04.536847  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:04.564995  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:04.565025  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:04.592902  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:04.592931  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:04.671813  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:04.671849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:04.710652  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:04.710684  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:04.820627  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:04.820664  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:04.897187  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:04.884402    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.885229    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.886886    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.887493    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.889166    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:04.884402    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.885229    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.886886    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.887493    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:04.889166    2712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:04.897212  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:04.897229  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:04.936329  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:04.936358  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.496901  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:07.514473  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:07.514547  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:07.540993  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:07.541017  346554 cri.go:89] found id: ""
	I1002 07:20:07.541025  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:07.541109  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.545015  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:07.545090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:07.572646  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:07.572670  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:07.572675  346554 cri.go:89] found id: ""
	I1002 07:20:07.572683  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:07.572763  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.576707  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.580612  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:07.580684  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:07.606885  346554 cri.go:89] found id: ""
	I1002 07:20:07.606909  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.606917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:07.606923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:07.606980  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:07.633971  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.634051  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:07.634072  346554 cri.go:89] found id: ""
	I1002 07:20:07.634115  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:07.634212  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.638009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.641489  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:07.641558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:07.669226  346554 cri.go:89] found id: ""
	I1002 07:20:07.669252  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.669262  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:07.669269  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:07.669328  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:07.697084  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:07.697110  346554 cri.go:89] found id: ""
	I1002 07:20:07.697119  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:07.697218  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:07.702023  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:07.702125  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:07.729244  346554 cri.go:89] found id: ""
	I1002 07:20:07.729270  346554 logs.go:282] 0 containers: []
	W1002 07:20:07.729279  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:07.729289  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:07.729305  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:07.774187  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:07.774226  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:07.840113  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:07.840153  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:07.873716  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:07.873757  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:07.891261  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:07.891289  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:07.916233  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:07.916263  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:07.952299  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:07.952332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:07.986719  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:07.986746  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:08.071303  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:08.071345  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:08.108002  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:08.108028  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:08.210536  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:08.210576  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:08.294093  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:08.284651    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286253    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286944    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.288549    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.289239    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:08.284651    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286253    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.286944    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.288549    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:08.289239    2866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
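	Every "describe nodes" attempt above fails the same way: the bundled kubectl cannot reach the apiserver and gets "connection refused" on localhost:8443. A minimal Go probe, included here purely as an assumed illustration and not part of the test, reports the same condition by requesting the URL that appears in the log:

	// Minimal sketch (assumed reproduction): probe the endpoint the kubectl
	// calls above fail against. While the apiserver is down this prints a
	// "connect: connection refused" error, matching the log lines.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a self-signed certificate during bring-up;
			// skip verification because this probe only checks reachability.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://localhost:8443/api?timeout=32s")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver responded with", resp.Status)
	}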
	I1002 07:20:10.795316  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:10.809081  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:10.809162  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:10.842834  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:10.842857  346554 cri.go:89] found id: ""
	I1002 07:20:10.842866  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:10.842923  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.846661  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:10.846743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:10.885119  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:10.885154  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:10.885160  346554 cri.go:89] found id: ""
	I1002 07:20:10.885167  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:10.885227  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.888993  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.892673  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:10.892745  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:10.919884  346554 cri.go:89] found id: ""
	I1002 07:20:10.919910  346554 logs.go:282] 0 containers: []
	W1002 07:20:10.919920  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:10.919926  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:10.919986  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:10.948791  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:10.948813  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:10.948818  346554 cri.go:89] found id: ""
	I1002 07:20:10.948832  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:10.948888  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.952760  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:10.956362  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:10.956465  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:10.984495  346554 cri.go:89] found id: ""
	I1002 07:20:10.984518  346554 logs.go:282] 0 containers: []
	W1002 07:20:10.984528  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:10.984535  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:10.984636  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:11.017757  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:11.017840  346554 cri.go:89] found id: ""
	I1002 07:20:11.017854  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:11.017923  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:11.022016  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:11.022121  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:11.049783  346554 cri.go:89] found id: ""
	I1002 07:20:11.049807  346554 logs.go:282] 0 containers: []
	W1002 07:20:11.049816  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:11.049826  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:11.049858  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:11.130029  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:11.121829    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.122481    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124100    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124782    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.126290    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:11.121829    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.122481    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124100    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.124782    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:11.126290    2935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:11.130050  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:11.130065  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:11.158585  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:11.158617  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:11.206663  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:11.206698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:11.251780  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:11.251812  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:11.320488  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:11.320524  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:11.401025  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:11.401061  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:11.509831  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:11.509925  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:11.528908  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:11.528984  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:11.560309  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:11.560340  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:11.587476  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:11.587505  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:14.117921  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:14.129181  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:14.129256  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:14.155142  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:14.155165  346554 cri.go:89] found id: ""
	I1002 07:20:14.155174  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:14.155234  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.158996  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:14.159072  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:14.187368  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:14.187439  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:14.187451  346554 cri.go:89] found id: ""
	I1002 07:20:14.187459  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:14.187516  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.191550  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.195394  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:14.195489  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:14.221702  346554 cri.go:89] found id: ""
	I1002 07:20:14.221731  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.221741  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:14.221748  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:14.221805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:14.250745  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:14.250768  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:14.250774  346554 cri.go:89] found id: ""
	I1002 07:20:14.250781  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:14.250840  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.254464  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.257656  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:14.257732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:14.287657  346554 cri.go:89] found id: ""
	I1002 07:20:14.287684  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.287693  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:14.287699  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:14.287763  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:14.317647  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:14.317670  346554 cri.go:89] found id: ""
	I1002 07:20:14.317680  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:14.317738  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:14.321550  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:14.321664  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:14.347420  346554 cri.go:89] found id: ""
	I1002 07:20:14.347445  346554 logs.go:282] 0 containers: []
	W1002 07:20:14.347455  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:14.347465  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:14.347476  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:14.428069  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:14.428106  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:14.482408  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:14.482447  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:14.534003  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:14.534036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:14.587616  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:14.587652  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:14.615153  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:14.615189  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:14.649482  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:14.649517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:14.745400  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:14.745440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:14.765273  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:14.765307  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:14.841087  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:14.832238    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.833271    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.834838    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.835677    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.837327    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:14.832238    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.833271    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.834838    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.835677    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:14.837327    3122 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:14.841109  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:14.841123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:14.867206  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:14.867236  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:17.396729  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:17.407809  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:17.407882  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:17.435626  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:17.435649  346554 cri.go:89] found id: ""
	I1002 07:20:17.435667  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:17.435729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.440093  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:17.440173  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:17.481710  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:17.481732  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:17.481738  346554 cri.go:89] found id: ""
	I1002 07:20:17.481745  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:17.481808  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.488857  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.492676  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:17.492748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:17.535179  346554 cri.go:89] found id: ""
	I1002 07:20:17.535251  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.535277  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:17.535317  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:17.535404  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:17.567305  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:17.567330  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:17.567335  346554 cri.go:89] found id: ""
	I1002 07:20:17.567343  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:17.567405  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.572504  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.576436  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:17.576540  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:17.604459  346554 cri.go:89] found id: ""
	I1002 07:20:17.604489  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.604498  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:17.604504  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:17.604568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:17.632230  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:17.632254  346554 cri.go:89] found id: ""
	I1002 07:20:17.632263  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:17.632352  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:17.636309  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:17.636416  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:17.664031  346554 cri.go:89] found id: ""
	I1002 07:20:17.664058  346554 logs.go:282] 0 containers: []
	W1002 07:20:17.664068  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:17.664078  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:17.664090  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:17.690836  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:17.690911  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:17.720348  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:17.720376  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:17.752215  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:17.752295  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:17.855749  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:17.855789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:17.872293  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:17.872320  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:17.923506  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:17.923540  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:17.971187  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:17.971220  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:18.041592  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:18.041630  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:18.085650  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:18.085682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:18.171333  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:18.171372  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:18.244409  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:18.236277    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.236822    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238310    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238776    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.240614    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:18.236277    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.236822    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238310    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.238776    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:18.240614    3273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:20.746282  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:20.757663  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:20.757743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:20.787729  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:20.787751  346554 cri.go:89] found id: ""
	I1002 07:20:20.787760  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:20.787845  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.792330  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:20.792424  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:20.829800  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:20.829824  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:20.829830  346554 cri.go:89] found id: ""
	I1002 07:20:20.829838  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:20.829899  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.833952  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.837642  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:20.837723  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:20.867702  346554 cri.go:89] found id: ""
	I1002 07:20:20.867725  346554 logs.go:282] 0 containers: []
	W1002 07:20:20.867734  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:20.867740  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:20.867830  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:20.908994  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:20.909016  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:20.909022  346554 cri.go:89] found id: ""
	I1002 07:20:20.909029  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:20.909085  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.913045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.916567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:20.916643  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:20.947545  346554 cri.go:89] found id: ""
	I1002 07:20:20.947571  346554 logs.go:282] 0 containers: []
	W1002 07:20:20.947581  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:20.947588  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:20.947651  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:20.980904  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:20.980984  346554 cri.go:89] found id: ""
	I1002 07:20:20.980999  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:20.981082  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:20.984909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:20.984982  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:21.020855  346554 cri.go:89] found id: ""
	I1002 07:20:21.020878  346554 logs.go:282] 0 containers: []
	W1002 07:20:21.020887  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:21.020896  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:21.020907  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:21.117602  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:21.117638  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:21.192022  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:21.182767    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.183788    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185393    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185998    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.187680    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:21.182767    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.183788    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185393    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.185998    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:21.187680    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:21.192043  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:21.192057  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:21.276022  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:21.276060  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:21.308782  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:21.308822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:21.396093  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:21.396132  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:21.438867  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:21.438900  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:21.463876  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:21.463906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:21.500802  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:21.500843  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:21.550471  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:21.550508  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:21.590310  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:21.590349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:24.119676  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:24.131693  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:24.131783  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:24.163845  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:24.163870  346554 cri.go:89] found id: ""
	I1002 07:20:24.163879  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:24.163939  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.167667  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:24.167742  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:24.195635  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:24.195658  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:24.195664  346554 cri.go:89] found id: ""
	I1002 07:20:24.195672  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:24.195731  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.199786  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.204099  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:24.204199  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:24.233690  346554 cri.go:89] found id: ""
	I1002 07:20:24.233716  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.233726  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:24.233733  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:24.233790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:24.262505  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:24.262565  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:24.262586  346554 cri.go:89] found id: ""
	I1002 07:20:24.262614  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:24.262691  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.266650  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.270417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:24.270511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:24.297687  346554 cri.go:89] found id: ""
	I1002 07:20:24.297713  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.297723  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:24.297729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:24.297790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:24.325175  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:24.325197  346554 cri.go:89] found id: ""
	I1002 07:20:24.325205  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:24.325284  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:24.329310  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:24.329399  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:24.358432  346554 cri.go:89] found id: ""
	I1002 07:20:24.358458  346554 logs.go:282] 0 containers: []
	W1002 07:20:24.358468  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:24.358477  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:24.358489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:24.418997  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:24.419034  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:24.449127  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:24.449155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:24.545814  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:24.545853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:24.561748  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:24.561777  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:24.632202  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:24.623701    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.624508    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626130    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626462    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.628020    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:24.623701    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.624508    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626130    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.626462    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:24.628020    3505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:24.632226  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:24.632239  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:24.662637  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:24.662668  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:24.740789  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:24.740830  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:24.773325  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:24.773357  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:24.807399  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:24.807428  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:24.853933  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:24.853972  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:27.396082  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:27.406955  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:27.407027  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:27.435147  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:27.435171  346554 cri.go:89] found id: ""
	I1002 07:20:27.435180  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:27.435238  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.440669  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:27.440745  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:27.467109  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:27.467176  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:27.467196  346554 cri.go:89] found id: ""
	I1002 07:20:27.467205  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:27.467275  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.471217  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.474815  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:27.474888  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:27.503111  346554 cri.go:89] found id: ""
	I1002 07:20:27.503136  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.503145  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:27.503152  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:27.503222  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:27.540213  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:27.540253  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:27.540260  346554 cri.go:89] found id: ""
	I1002 07:20:27.540276  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:27.540359  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.544590  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.548529  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:27.548605  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:27.577677  346554 cri.go:89] found id: ""
	I1002 07:20:27.577746  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.577772  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:27.577798  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:27.577892  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:27.607310  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:27.607329  346554 cri.go:89] found id: ""
	I1002 07:20:27.607337  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:27.607393  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:27.611619  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:27.611690  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:27.647844  346554 cri.go:89] found id: ""
	I1002 07:20:27.647872  346554 logs.go:282] 0 containers: []
	W1002 07:20:27.647882  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:27.647892  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:27.647905  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:27.723377  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:27.713686    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.714844    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.715834    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717611    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717950    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:27.713686    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.714844    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.715834    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717611    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:27.717950    3620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:27.723400  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:27.723419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:27.750902  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:27.750932  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:27.804228  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:27.804267  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:27.866989  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:27.867068  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:27.895361  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:27.895393  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:28.004869  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:28.004912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:28.030605  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:28.030637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:28.090494  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:28.090531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:28.120915  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:28.120953  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:28.213702  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:28.213740  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:30.746147  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:30.758010  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:30.758090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:30.789909  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:30.789936  346554 cri.go:89] found id: ""
	I1002 07:20:30.789945  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:30.790004  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.794321  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:30.794407  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:30.823421  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:30.823445  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:30.823451  346554 cri.go:89] found id: ""
	I1002 07:20:30.823459  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:30.823520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.827486  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.831334  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:30.831416  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:30.857968  346554 cri.go:89] found id: ""
	I1002 07:20:30.857996  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.858005  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:30.858012  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:30.858073  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:30.885972  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:30.885997  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:30.886002  346554 cri.go:89] found id: ""
	I1002 07:20:30.886010  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:30.886074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.891710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.897102  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:30.897174  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:30.928917  346554 cri.go:89] found id: ""
	I1002 07:20:30.928944  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.928953  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:30.928960  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:30.929079  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:30.957428  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:30.957456  346554 cri.go:89] found id: ""
	I1002 07:20:30.957465  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:30.957524  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:30.961555  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:30.961638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:30.991607  346554 cri.go:89] found id: ""
	I1002 07:20:30.991644  346554 logs.go:282] 0 containers: []
	W1002 07:20:30.991654  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:30.991664  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:30.991682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:31.034696  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:31.034732  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:31.095475  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:31.095521  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:31.124509  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:31.124543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:31.164950  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:31.164982  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:31.242438  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:31.232305    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.233259    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.234890    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.236692    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.237374    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:31.232305    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.233259    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.234890    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.236692    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:31.237374    3792 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:31.242461  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:31.242475  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:31.288791  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:31.288829  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:31.324555  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:31.324590  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:31.358683  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:31.358775  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:31.442957  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:31.443002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:31.546184  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:31.546226  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:34.062520  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:34.074346  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:34.074429  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:34.104094  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:34.104116  346554 cri.go:89] found id: ""
	I1002 07:20:34.104124  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:34.104184  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.108168  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:34.108242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:34.134780  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:34.134803  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:34.134808  346554 cri.go:89] found id: ""
	I1002 07:20:34.134816  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:34.134873  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.140158  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.144631  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:34.144709  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:34.171174  346554 cri.go:89] found id: ""
	I1002 07:20:34.171197  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.171209  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:34.171216  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:34.171279  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:34.201197  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:34.201265  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:34.201279  346554 cri.go:89] found id: ""
	I1002 07:20:34.201289  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:34.201358  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.205487  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.209274  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:34.209371  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:34.236797  346554 cri.go:89] found id: ""
	I1002 07:20:34.236823  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.236832  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:34.236839  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:34.236899  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:34.268130  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:34.268153  346554 cri.go:89] found id: ""
	I1002 07:20:34.268163  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:34.268221  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:34.272288  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:34.272494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:34.303012  346554 cri.go:89] found id: ""
	I1002 07:20:34.303036  346554 logs.go:282] 0 containers: []
	W1002 07:20:34.303046  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:34.303057  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:34.303069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:34.330987  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:34.331016  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:34.409294  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:34.409332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:34.444890  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:34.444921  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:34.529848  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:34.521813    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.522492    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.523830    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.524582    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.526232    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:34.521813    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.522492    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.523830    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.524582    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:34.526232    3914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:34.529873  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:34.529887  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:34.576746  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:34.576783  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:34.617959  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:34.617994  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:34.680077  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:34.680116  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:34.709769  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:34.709801  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:34.741411  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:34.741440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:34.841059  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:34.841096  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:37.359292  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:37.370946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:37.371032  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:37.399137  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:37.399162  346554 cri.go:89] found id: ""
	I1002 07:20:37.399171  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:37.399230  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.403338  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:37.403412  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:37.430753  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:37.430777  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:37.430782  346554 cri.go:89] found id: ""
	I1002 07:20:37.430790  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:37.430846  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.434756  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.440208  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:37.440282  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:37.466624  346554 cri.go:89] found id: ""
	I1002 07:20:37.466708  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.466741  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:37.466763  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:37.466859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:37.494022  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:37.494043  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:37.494049  346554 cri.go:89] found id: ""
	I1002 07:20:37.494057  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:37.494137  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.498098  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.502412  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:37.502500  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:37.535920  346554 cri.go:89] found id: ""
	I1002 07:20:37.535947  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.535956  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:37.535963  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:37.536022  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:37.562970  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:37.562994  346554 cri.go:89] found id: ""
	I1002 07:20:37.563004  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:37.563062  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:37.567000  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:37.567077  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:37.595796  346554 cri.go:89] found id: ""
	I1002 07:20:37.595823  346554 logs.go:282] 0 containers: []
	W1002 07:20:37.595832  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:37.595842  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:37.595875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:37.622318  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:37.622347  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:37.698567  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:37.698606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:37.730294  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:37.730323  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:37.746780  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:37.746819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:37.774051  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:37.774082  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:37.842657  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:37.842692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:37.879058  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:37.879101  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:37.958213  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:37.958255  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:38.066523  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:38.066564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:38.140589  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:38.132053    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.132715    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.134486    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.135135    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.136775    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:38.132053    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.132715    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.134486    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.135135    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:38.136775    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:38.140614  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:38.140628  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:40.668101  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:40.680533  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:40.680613  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:40.709182  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:40.709201  346554 cri.go:89] found id: ""
	I1002 07:20:40.709217  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:40.709275  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.714063  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:40.714131  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:40.741940  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:40.741960  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:40.741965  346554 cri.go:89] found id: ""
	I1002 07:20:40.741972  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:40.742030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.746103  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.749819  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:40.749890  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:40.779806  346554 cri.go:89] found id: ""
	I1002 07:20:40.779869  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.779893  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:40.779918  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:40.779999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:40.818846  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:40.818910  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:40.818930  346554 cri.go:89] found id: ""
	I1002 07:20:40.818956  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:40.819034  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.825049  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.829111  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:40.829255  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:40.857000  346554 cri.go:89] found id: ""
	I1002 07:20:40.857070  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.857101  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:40.857116  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:40.857204  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:40.890997  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:40.891021  346554 cri.go:89] found id: ""
	I1002 07:20:40.891030  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:40.891120  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:40.902062  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:40.902188  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:40.931155  346554 cri.go:89] found id: ""
	I1002 07:20:40.931192  346554 logs.go:282] 0 containers: []
	W1002 07:20:40.931201  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:40.931258  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:40.931282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:40.968238  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:40.968267  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:41.004537  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:41.004577  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:41.077656  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:41.077693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:41.110709  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:41.110738  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:41.146808  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:41.146839  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:41.218315  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:41.209116    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.209601    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.211401    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213018    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213363    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:41.209116    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.209601    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.211401    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213018    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:41.213363    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:41.218395  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:41.218476  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:41.270106  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:41.270141  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:41.300977  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:41.301007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:41.385349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:41.385387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:41.485614  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:41.485658  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:44.002362  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:44.017480  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:44.017558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:44.055626  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:44.055653  346554 cri.go:89] found id: ""
	I1002 07:20:44.055662  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:44.055736  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.059917  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:44.059997  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:44.097033  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:44.097067  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:44.097072  346554 cri.go:89] found id: ""
	I1002 07:20:44.097079  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:44.097147  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.101257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.105790  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:44.105890  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:44.134184  346554 cri.go:89] found id: ""
	I1002 07:20:44.134213  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.134222  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:44.134229  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:44.134316  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:44.172910  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:44.172972  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:44.172992  346554 cri.go:89] found id: ""
	I1002 07:20:44.173019  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:44.173087  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.177020  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.181101  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:44.181189  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:44.210050  346554 cri.go:89] found id: ""
	I1002 07:20:44.210072  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.210081  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:44.210088  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:44.210148  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:44.236942  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:44.236966  346554 cri.go:89] found id: ""
	I1002 07:20:44.236975  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:44.237032  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:44.240886  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:44.240968  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:44.267437  346554 cri.go:89] found id: ""
	I1002 07:20:44.267471  346554 logs.go:282] 0 containers: []
	W1002 07:20:44.267482  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:44.267498  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:44.267522  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:44.311617  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:44.311650  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:44.371464  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:44.371502  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:44.401657  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:44.401685  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:44.429428  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:44.429458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:44.457332  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:44.457370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:44.542400  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:44.542441  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:44.576729  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:44.576808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:44.671950  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:44.671991  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:44.688074  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:44.688102  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:44.772308  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:44.762400    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.763526    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.764141    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766001    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766685    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:44.762400    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.763526    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.764141    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766001    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:44.766685    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:44.772331  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:44.772344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.326275  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:47.337461  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:47.337588  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:47.370813  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:47.370885  346554 cri.go:89] found id: ""
	I1002 07:20:47.370909  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:47.370985  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.375983  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:47.376102  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:47.408952  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.409021  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:47.409046  346554 cri.go:89] found id: ""
	I1002 07:20:47.409075  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:47.409142  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.412894  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.416604  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:47.416678  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:47.443724  346554 cri.go:89] found id: ""
	I1002 07:20:47.443746  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.443755  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:47.443761  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:47.443825  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:47.472814  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:47.472835  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:47.472840  346554 cri.go:89] found id: ""
	I1002 07:20:47.472848  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:47.472910  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.476853  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.481052  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:47.481125  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:47.527292  346554 cri.go:89] found id: ""
	I1002 07:20:47.527316  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.527325  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:47.527331  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:47.527396  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:47.557465  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:47.557493  346554 cri.go:89] found id: ""
	I1002 07:20:47.557502  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:47.557573  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:47.561605  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:47.561776  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:47.592217  346554 cri.go:89] found id: ""
	I1002 07:20:47.592251  346554 logs.go:282] 0 containers: []
	W1002 07:20:47.592261  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:47.592270  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:47.592282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:47.609667  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:47.609697  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:47.670961  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:47.670999  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:47.701512  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:47.701543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:47.730463  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:47.730493  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:47.813379  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:47.804825    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.805487    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.806775    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.807262    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.808792    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:47.804825    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.805487    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.806775    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.807262    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:47.808792    4477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:47.813403  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:47.813417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:47.839632  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:47.839663  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:47.890767  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:47.890807  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:47.931484  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:47.931519  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:48.013592  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:48.013683  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:48.048341  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:48.048371  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:50.660679  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:50.672098  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:50.672208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:50.698977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:50.699002  346554 cri.go:89] found id: ""
	I1002 07:20:50.699012  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:50.699155  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.703120  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:50.703197  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:50.731004  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:50.731030  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:50.731035  346554 cri.go:89] found id: ""
	I1002 07:20:50.731043  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:50.731134  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.735170  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.739036  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:50.739228  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:50.765233  346554 cri.go:89] found id: ""
	I1002 07:20:50.765257  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.765267  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:50.765276  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:50.765337  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:50.798825  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:50.798846  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:50.798851  346554 cri.go:89] found id: ""
	I1002 07:20:50.798858  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:50.798922  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.803023  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.806604  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:50.806684  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:50.834561  346554 cri.go:89] found id: ""
	I1002 07:20:50.834595  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.834605  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:50.834612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:50.834685  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:50.862616  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:50.862640  346554 cri.go:89] found id: ""
	I1002 07:20:50.862649  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:50.862719  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:50.866512  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:50.866591  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:50.894801  346554 cri.go:89] found id: ""
	I1002 07:20:50.894874  346554 logs.go:282] 0 containers: []
	W1002 07:20:50.894898  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:50.894927  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:50.894970  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:50.922014  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:50.922093  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:50.963158  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:50.963238  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:51.041253  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:51.041298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:51.078068  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:51.078373  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:51.109345  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:51.109379  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:51.143553  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:51.143586  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:51.160251  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:51.160287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:51.232331  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:51.222843    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.223585    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226402    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226914    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.228078    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:51.222843    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.223585    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226402    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.226914    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:51.228078    4642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:51.232357  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:51.232370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:51.284859  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:51.284891  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:51.366726  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:51.366764  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:53.965349  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:53.977241  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:53.977365  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:54.007342  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:54.007370  346554 cri.go:89] found id: ""
	I1002 07:20:54.007379  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:54.007452  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.014154  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:54.014243  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:54.042738  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:54.042761  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:54.042767  346554 cri.go:89] found id: ""
	I1002 07:20:54.042787  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:54.042849  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.047324  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.052426  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:54.052514  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:54.092137  346554 cri.go:89] found id: ""
	I1002 07:20:54.092162  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.092171  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:54.092177  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:54.092245  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:54.123873  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:54.123895  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:54.123900  346554 cri.go:89] found id: ""
	I1002 07:20:54.123908  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:54.123966  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.128307  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.132643  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:54.132764  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:54.167072  346554 cri.go:89] found id: ""
	I1002 07:20:54.167173  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.167197  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:54.167223  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:54.167317  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:54.201096  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:54.201124  346554 cri.go:89] found id: ""
	I1002 07:20:54.201133  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:54.201192  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:54.205200  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:54.205319  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:54.232346  346554 cri.go:89] found id: ""
	I1002 07:20:54.232375  346554 logs.go:282] 0 containers: []
	W1002 07:20:54.232384  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:54.232394  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:54.232424  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:54.307053  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:54.297800    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.298604    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.300420    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.301180    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.302885    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:54.297800    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.298604    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.300420    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.301180    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:54.302885    4725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:54.307076  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:54.307120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:54.339765  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:54.339797  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:54.389419  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:54.389463  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:54.427898  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:54.427934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:54.459945  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:54.459979  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:54.495013  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:54.495049  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:54.593488  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:54.593523  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:54.699166  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:54.699248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:54.715185  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:54.715217  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:54.790047  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:54.790081  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:57.332703  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:20:57.343440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:20:57.343508  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:20:57.371159  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:57.371224  346554 cri.go:89] found id: ""
	I1002 07:20:57.371248  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:20:57.371325  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.376379  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:20:57.376455  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:20:57.403394  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:57.403417  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:57.403423  346554 cri.go:89] found id: ""
	I1002 07:20:57.403431  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:20:57.403486  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.407238  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.410942  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:20:57.411033  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:20:57.438995  346554 cri.go:89] found id: ""
	I1002 07:20:57.439020  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.439029  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:20:57.439036  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:20:57.439133  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:20:57.471614  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:57.471639  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:20:57.471644  346554 cri.go:89] found id: ""
	I1002 07:20:57.471656  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:20:57.471714  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.475670  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.479817  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:20:57.479927  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:20:57.514129  346554 cri.go:89] found id: ""
	I1002 07:20:57.514152  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.514160  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:20:57.514166  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:20:57.514229  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:20:57.540930  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:57.540954  346554 cri.go:89] found id: ""
	I1002 07:20:57.540963  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:20:57.541019  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:20:57.545166  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:20:57.545246  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:20:57.580607  346554 cri.go:89] found id: ""
	I1002 07:20:57.580633  346554 logs.go:282] 0 containers: []
	W1002 07:20:57.580643  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:20:57.580653  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:20:57.580682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:20:57.662349  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:20:57.662389  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:20:57.761863  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:20:57.761900  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:20:57.830325  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:20:57.830366  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:20:57.856569  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:20:57.856598  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:20:57.888135  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:20:57.888164  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:20:57.906242  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:20:57.906270  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:20:57.976993  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:20:57.967788    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.968516    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.970387    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.971058    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.973057    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:20:57.967788    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.968516    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.970387    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.971058    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:20:57.973057    4895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:20:57.977018  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:20:57.977033  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:20:58.011287  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:20:58.011323  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:20:58.063746  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:20:58.063782  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:20:58.114504  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:20:58.114539  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:00.655161  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:00.666760  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:00.666847  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:00.699194  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:00.699218  346554 cri.go:89] found id: ""
	I1002 07:21:00.699227  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:00.699283  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.703475  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:00.703551  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:00.730837  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:00.730862  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:00.730867  346554 cri.go:89] found id: ""
	I1002 07:21:00.730874  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:00.730933  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.734900  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.738704  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:00.738777  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:00.765809  346554 cri.go:89] found id: ""
	I1002 07:21:00.765832  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.765841  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:00.765847  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:00.765903  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:00.806888  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:00.806911  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:00.806916  346554 cri.go:89] found id: ""
	I1002 07:21:00.806924  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:00.806982  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.810980  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.815454  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:00.815527  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:00.843377  346554 cri.go:89] found id: ""
	I1002 07:21:00.843403  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.843413  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:00.843419  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:00.843480  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:00.870064  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:00.870084  346554 cri.go:89] found id: ""
	I1002 07:21:00.870094  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:00.870150  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:00.874067  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:00.874142  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:00.912375  346554 cri.go:89] found id: ""
	I1002 07:21:00.912400  346554 logs.go:282] 0 containers: []
	W1002 07:21:00.912409  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:00.912419  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:00.912437  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:01.010660  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:01.010703  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:01.027564  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:01.027589  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:01.108980  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:01.099987    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101432    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101988    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103531    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103983    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:01.099987    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101432    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.101988    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103531    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:01.103983    5005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:01.109003  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:01.109017  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:01.140899  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:01.140925  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:01.201677  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:01.201719  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:01.249485  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:01.249516  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:01.310648  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:01.310682  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:01.339591  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:01.339668  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:01.368293  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:01.368363  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:01.451526  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:01.451565  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:03.985004  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:03.995665  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:03.995732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:04.038756  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:04.038786  346554 cri.go:89] found id: ""
	I1002 07:21:04.038796  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:04.038863  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.042734  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:04.042813  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:04.080960  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:04.080984  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:04.080990  346554 cri.go:89] found id: ""
	I1002 07:21:04.080998  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:04.081055  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.085045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.088904  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:04.088984  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:04.116470  346554 cri.go:89] found id: ""
	I1002 07:21:04.116495  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.116504  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:04.116511  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:04.116568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:04.143301  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:04.143324  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:04.143330  346554 cri.go:89] found id: ""
	I1002 07:21:04.143336  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:04.143392  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.149220  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.156754  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:04.156875  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:04.186088  346554 cri.go:89] found id: ""
	I1002 07:21:04.186115  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.186125  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:04.186131  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:04.186222  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:04.213953  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:04.213978  346554 cri.go:89] found id: ""
	I1002 07:21:04.213987  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:04.214074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:04.220236  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:04.220339  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:04.249797  346554 cri.go:89] found id: ""
	I1002 07:21:04.249825  346554 logs.go:282] 0 containers: []
	W1002 07:21:04.249834  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:04.249876  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:04.249893  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:04.334427  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:04.334464  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:04.365264  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:04.365294  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:04.467641  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:04.467693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:04.495501  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:04.495532  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:04.553841  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:04.553879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:04.590884  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:04.590912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:04.618124  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:04.618157  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:04.634781  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:04.634812  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:04.712412  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:04.704035    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.704877    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706460    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706999    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.708596    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:04.704035    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.704877    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706460    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.706999    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:04.708596    5191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:04.712440  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:04.712458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:04.772367  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:04.772405  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:07.313327  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:07.324335  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:07.324410  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:07.352343  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:07.352367  346554 cri.go:89] found id: ""
	I1002 07:21:07.352376  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:07.352456  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.356634  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:07.356705  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:07.384754  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:07.384778  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:07.384783  346554 cri.go:89] found id: ""
	I1002 07:21:07.384791  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:07.384871  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.388840  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.392572  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:07.392672  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:07.418573  346554 cri.go:89] found id: ""
	I1002 07:21:07.418605  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.418615  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:07.418622  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:07.418681  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:07.450415  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:07.450439  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:07.450445  346554 cri.go:89] found id: ""
	I1002 07:21:07.450466  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:07.450529  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.454971  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.459463  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:07.459539  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:07.488692  346554 cri.go:89] found id: ""
	I1002 07:21:07.488722  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.488730  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:07.488737  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:07.488799  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:07.520325  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:07.520350  346554 cri.go:89] found id: ""
	I1002 07:21:07.520359  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:07.520421  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:07.524256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:07.524330  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:07.549519  346554 cri.go:89] found id: ""
	I1002 07:21:07.549540  346554 logs.go:282] 0 containers: []
	W1002 07:21:07.549548  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:07.549558  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:07.549569  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:07.643274  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:07.643315  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:07.716156  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:07.708091    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.708893    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710592    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710902    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.712357    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:07.708091    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.708893    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710592    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.710902    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:07.712357    5274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:07.716179  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:07.716195  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:07.743950  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:07.743980  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:07.830226  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:07.830266  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:07.847230  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:07.847260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:07.875839  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:07.875908  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:07.937408  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:07.937448  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:07.974391  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:07.974428  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:08.044504  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:08.044544  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:08.085844  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:08.085875  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:10.619391  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:10.631035  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:10.631208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:10.664959  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:10.664983  346554 cri.go:89] found id: ""
	I1002 07:21:10.664992  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:10.665070  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.668812  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:10.668884  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:10.695400  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:10.695424  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:10.695430  346554 cri.go:89] found id: ""
	I1002 07:21:10.695438  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:10.695526  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.699317  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.703430  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:10.703524  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:10.728859  346554 cri.go:89] found id: ""
	I1002 07:21:10.728883  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.728892  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:10.728898  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:10.728974  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:10.754882  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:10.754905  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:10.754911  346554 cri.go:89] found id: ""
	I1002 07:21:10.754918  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:10.754984  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.758686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.762139  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:10.762248  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:10.787999  346554 cri.go:89] found id: ""
	I1002 07:21:10.788067  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.788092  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:10.788115  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:10.788204  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:10.814729  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:10.814803  346554 cri.go:89] found id: ""
	I1002 07:21:10.814825  346554 logs.go:282] 1 containers: [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:10.814914  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:10.818388  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:10.818483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:10.845398  346554 cri.go:89] found id: ""
	I1002 07:21:10.845424  346554 logs.go:282] 0 containers: []
	W1002 07:21:10.845433  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:10.845443  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:10.845482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:10.873199  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:10.873225  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:10.951572  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:10.951609  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:11.051035  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:11.051118  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:11.130878  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:11.121998    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.122765    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.124521    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.125102    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.126722    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:11.121998    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.122765    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.124521    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.125102    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:11.126722    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:11.130909  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:11.130924  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:11.156885  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:11.156920  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:11.211573  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:11.211615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:11.272703  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:11.272742  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:11.301304  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:11.301336  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:11.342833  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:11.342861  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:11.360176  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:11.360204  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:13.902061  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:13.915871  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:13.915935  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:13.954412  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:13.954439  346554 cri.go:89] found id: ""
	I1002 07:21:13.954448  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:13.954513  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:13.959571  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:13.959655  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:13.994709  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:13.994729  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:13.994735  346554 cri.go:89] found id: ""
	I1002 07:21:13.994743  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:13.994797  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:13.999427  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.003663  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:14.003749  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:14.042653  346554 cri.go:89] found id: ""
	I1002 07:21:14.042680  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.042690  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:14.042696  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:14.042757  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:14.087595  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:14.087615  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:14.087620  346554 cri.go:89] found id: ""
	I1002 07:21:14.087628  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:14.087688  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.092427  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.096855  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:14.096920  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:14.126816  346554 cri.go:89] found id: ""
	I1002 07:21:14.126843  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.126852  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:14.126858  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:14.126918  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:14.155318  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:14.155339  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:14.155344  346554 cri.go:89] found id: ""
	I1002 07:21:14.155351  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:14.155407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.159934  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:14.164569  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:14.164634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:14.209412  346554 cri.go:89] found id: ""
	I1002 07:21:14.209437  346554 logs.go:282] 0 containers: []
	W1002 07:21:14.209449  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:14.209459  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:14.209471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:14.225995  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:14.226022  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:14.263998  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:14.264027  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:14.360121  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:14.360159  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:14.407199  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:14.407234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:14.434782  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:14.434814  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:14.521080  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:14.521121  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:14.593104  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:14.593134  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:14.699269  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:14.699308  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:14.786512  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:14.774915    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.778879    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.779597    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781358    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781959    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:14.774915    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.778879    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.779597    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781358    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:14.781959    5613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:14.786535  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:14.786548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:14.869065  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:14.869109  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:14.900362  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:14.900454  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:17.430222  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:17.442136  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:17.442212  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:17.468618  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:17.468642  346554 cri.go:89] found id: ""
	I1002 07:21:17.468664  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:17.468722  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.472407  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:17.472483  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:17.500441  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:17.500462  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:17.500468  346554 cri.go:89] found id: ""
	I1002 07:21:17.500475  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:17.500534  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.504574  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.511111  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:17.511190  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:17.539180  346554 cri.go:89] found id: ""
	I1002 07:21:17.539208  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.539217  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:17.539224  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:17.539283  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:17.567616  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:17.567641  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:17.567647  346554 cri.go:89] found id: ""
	I1002 07:21:17.567654  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:17.567710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.571727  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.575519  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:17.575603  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:17.601045  346554 cri.go:89] found id: ""
	I1002 07:21:17.601070  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.601079  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:17.601086  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:17.601143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:17.628358  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:17.628379  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:17.628384  346554 cri.go:89] found id: ""
	I1002 07:21:17.628391  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:17.628479  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.632534  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:17.636208  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:17.636286  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:17.662364  346554 cri.go:89] found id: ""
	I1002 07:21:17.662389  346554 logs.go:282] 0 containers: []
	W1002 07:21:17.662398  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:17.662408  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:17.662419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:17.756609  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:17.756643  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:17.772784  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:17.772821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:17.854603  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:17.846770    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.847523    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849095    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849421    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.850951    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:17.846770    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.847523    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849095    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.849421    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:17.850951    5717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:17.854625  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:17.854639  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:17.890480  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:17.890513  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:17.955720  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:17.955755  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:17.986877  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:17.986906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:18.065618  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:18.065659  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:18.111257  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:18.111287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:18.141121  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:18.141151  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:18.202491  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:18.202530  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:18.232094  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:18.232124  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:20.762758  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:20.773630  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:20.773708  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:20.806503  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:20.806533  346554 cri.go:89] found id: ""
	I1002 07:21:20.806542  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:20.806599  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.810265  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:20.810338  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:20.839055  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:20.839105  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:20.839111  346554 cri.go:89] found id: ""
	I1002 07:21:20.839119  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:20.839176  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.843029  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.846663  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:20.846743  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:20.875148  346554 cri.go:89] found id: ""
	I1002 07:21:20.875173  346554 logs.go:282] 0 containers: []
	W1002 07:21:20.875183  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:20.875190  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:20.875249  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:20.907677  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:20.907701  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:20.907707  346554 cri.go:89] found id: ""
	I1002 07:21:20.907715  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:20.907772  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.911686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.915632  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:20.915707  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:20.941873  346554 cri.go:89] found id: ""
	I1002 07:21:20.941899  346554 logs.go:282] 0 containers: []
	W1002 07:21:20.941908  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:20.941915  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:20.941975  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:20.973490  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:20.973515  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:20.973521  346554 cri.go:89] found id: ""
	I1002 07:21:20.973530  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:20.973585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.977414  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:20.981138  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:20.981213  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:21.013505  346554 cri.go:89] found id: ""
	I1002 07:21:21.013533  346554 logs.go:282] 0 containers: []
	W1002 07:21:21.013543  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:21.013553  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:21.013565  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:21.047930  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:21.047959  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:21.144461  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:21.144498  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:21.218444  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:21.209931    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.210755    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212333    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212924    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.214549    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:21.209931    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.210755    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212333    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.212924    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:21.214549    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:21.218469  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:21.218482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:21.244979  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:21.245010  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:21.273907  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:21.273940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:21.304310  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:21.304341  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:21.383311  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:21.383390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:21.418944  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:21.418976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:21.437126  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:21.437154  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:21.499338  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:21.499373  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:21.541388  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:21.541424  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:24.103318  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:24.114524  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:24.114645  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:24.142263  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:24.142286  346554 cri.go:89] found id: ""
	I1002 07:21:24.142295  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:24.142357  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.146924  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:24.146998  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:24.174920  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:24.174945  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:24.174950  346554 cri.go:89] found id: ""
	I1002 07:21:24.174958  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:24.175015  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.179961  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.183781  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:24.183859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:24.213946  346554 cri.go:89] found id: ""
	I1002 07:21:24.213969  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.213978  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:24.213985  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:24.214044  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:24.240875  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:24.240898  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:24.240903  346554 cri.go:89] found id: ""
	I1002 07:21:24.240910  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:24.240967  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.244817  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.248504  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:24.248601  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:24.277554  346554 cri.go:89] found id: ""
	I1002 07:21:24.277579  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.277588  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:24.277595  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:24.277675  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:24.308411  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:24.308507  346554 cri.go:89] found id: "843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:24.308518  346554 cri.go:89] found id: ""
	I1002 07:21:24.308526  346554 logs.go:282] 2 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851]
	I1002 07:21:24.308585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.312514  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:24.316209  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:24.316322  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:24.352013  346554 cri.go:89] found id: ""
	I1002 07:21:24.352037  346554 logs.go:282] 0 containers: []
	W1002 07:21:24.352047  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:24.352057  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:24.352070  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:24.392888  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:24.392926  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:24.422136  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:24.422162  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:24.522148  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:24.522189  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:24.559761  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:24.559789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:24.635577  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:24.626450    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.627161    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.628806    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.629342    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.630887    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:24.626450    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.627161    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.628806    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.629342    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:24.630887    6031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:24.635658  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:24.635688  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:24.664008  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:24.664038  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:24.716205  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:24.716243  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:24.776422  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:24.776465  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:24.812576  346554 logs.go:123] Gathering logs for kube-controller-manager [843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851] ...
	I1002 07:21:24.812606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 843e8a446c7c211e057ae6a3d15116d5e44d24a608c5c9687236068876e27851"
	I1002 07:21:24.850011  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:24.850051  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:24.957619  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:24.957658  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:27.474346  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:27.486924  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:27.486999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:27.527387  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:27.527411  346554 cri.go:89] found id: ""
	I1002 07:21:27.527419  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:27.527481  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.531347  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:27.531425  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:27.557184  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:27.557209  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:27.557216  346554 cri.go:89] found id: ""
	I1002 07:21:27.557226  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:27.557285  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.561185  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.564887  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:27.564964  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:27.593958  346554 cri.go:89] found id: ""
	I1002 07:21:27.593984  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.593993  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:27.594000  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:27.594070  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:27.624297  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:27.624321  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:27.624325  346554 cri.go:89] found id: ""
	I1002 07:21:27.624332  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:27.624390  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.628548  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.632313  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:27.632401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:27.658827  346554 cri.go:89] found id: ""
	I1002 07:21:27.658850  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.658858  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:27.658876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:27.658942  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:27.687346  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:27.687422  346554 cri.go:89] found id: ""
	I1002 07:21:27.687438  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:27.687516  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:27.691438  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:27.691563  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:27.716933  346554 cri.go:89] found id: ""
	I1002 07:21:27.716959  346554 logs.go:282] 0 containers: []
	W1002 07:21:27.716969  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:27.716979  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:27.717019  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:27.817783  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:27.817831  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:27.857490  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:27.857525  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:27.885125  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:27.885157  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:27.918095  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:27.918133  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:27.933988  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:27.934018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:28.004686  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:27.994706    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.995565    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997325    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997806    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.999393    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:27.994706    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.995565    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997325    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.997806    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:27.999393    6185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:28.004719  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:28.004734  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:28.034260  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:28.034287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:28.093230  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:28.093269  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:28.164138  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:28.164177  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:28.195157  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:28.195188  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:30.778568  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:30.789765  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:30.789833  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:30.825174  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:30.825194  346554 cri.go:89] found id: ""
	I1002 07:21:30.825202  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:30.825257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.829729  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:30.829796  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:30.856611  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:30.856632  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:30.856637  346554 cri.go:89] found id: ""
	I1002 07:21:30.856644  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:30.856701  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.860561  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.864279  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:30.864353  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:30.891192  346554 cri.go:89] found id: ""
	I1002 07:21:30.891217  346554 logs.go:282] 0 containers: []
	W1002 07:21:30.891257  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:30.891269  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:30.891353  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:30.918873  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:30.918892  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:30.918897  346554 cri.go:89] found id: ""
	I1002 07:21:30.918904  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:30.918965  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.922949  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.926830  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:30.926928  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:30.953030  346554 cri.go:89] found id: ""
	I1002 07:21:30.953059  346554 logs.go:282] 0 containers: []
	W1002 07:21:30.953068  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:30.953074  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:30.953131  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:30.980458  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:30.980480  346554 cri.go:89] found id: ""
	I1002 07:21:30.980489  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:30.980547  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:30.984323  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:30.984450  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:31.026334  346554 cri.go:89] found id: ""
	I1002 07:21:31.026360  346554 logs.go:282] 0 containers: []
	W1002 07:21:31.026370  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:31.026380  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:31.026416  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:31.058391  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:31.058420  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:31.116004  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:31.116040  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:31.151060  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:31.151099  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:31.231368  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:31.231406  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:31.332798  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:31.332835  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:31.413678  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:31.405625    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.406285    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.407900    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.408576    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.410010    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:31.405625    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.406285    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.407900    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.408576    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:31.410010    6319 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:31.413705  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:31.413717  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:31.461265  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:31.461299  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:31.534946  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:31.534986  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:31.562600  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:31.562629  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:31.592876  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:31.592906  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:34.110078  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:34.121201  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:34.121271  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:34.148533  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:34.148554  346554 cri.go:89] found id: ""
	I1002 07:21:34.148562  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:34.148621  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.152503  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:34.152585  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:34.181027  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:34.181050  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:34.181056  346554 cri.go:89] found id: ""
	I1002 07:21:34.181063  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:34.181117  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.185002  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.189485  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:34.189560  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:34.215599  346554 cri.go:89] found id: ""
	I1002 07:21:34.215625  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.215634  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:34.215641  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:34.215699  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:34.241734  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:34.241763  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:34.241768  346554 cri.go:89] found id: ""
	I1002 07:21:34.241776  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:34.241832  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.245545  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.248974  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:34.249050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:34.276023  346554 cri.go:89] found id: ""
	I1002 07:21:34.276049  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.276059  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:34.276072  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:34.276132  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:34.303384  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:34.303407  346554 cri.go:89] found id: ""
	I1002 07:21:34.303415  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:34.303472  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:34.307469  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:34.307539  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:34.340234  346554 cri.go:89] found id: ""
	I1002 07:21:34.340261  346554 logs.go:282] 0 containers: []
	W1002 07:21:34.340271  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:34.340281  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:34.340293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:34.356522  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:34.356550  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:34.394796  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:34.394825  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:34.443502  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:34.443538  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:34.474055  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:34.474081  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:34.555556  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:34.555637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:34.658066  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:34.658101  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:34.733631  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:34.724940    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.725631    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.727437    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.728124    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.729973    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:34.724940    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.725631    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.727437    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.728124    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:34.729973    6465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:34.733651  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:34.733665  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:34.784032  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:34.784068  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:34.847736  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:34.847771  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:34.875075  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:34.875172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:37.408950  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:37.421164  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:37.421273  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:37.452410  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:37.452439  346554 cri.go:89] found id: ""
	I1002 07:21:37.452449  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:37.452505  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.456325  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:37.456445  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:37.486317  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:37.486340  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:37.486346  346554 cri.go:89] found id: ""
	I1002 07:21:37.486353  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:37.486451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.490342  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.494027  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:37.494104  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:37.527183  346554 cri.go:89] found id: ""
	I1002 07:21:37.527257  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.527281  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:37.527305  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:37.527403  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:37.553164  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:37.553189  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:37.553194  346554 cri.go:89] found id: ""
	I1002 07:21:37.553202  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:37.553263  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.557191  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.560812  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:37.560909  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:37.592768  346554 cri.go:89] found id: ""
	I1002 07:21:37.592837  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.592861  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:37.592887  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:37.592973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:37.619244  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:37.619275  346554 cri.go:89] found id: ""
	I1002 07:21:37.619285  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:37.619382  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:37.622994  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:37.623067  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:37.654796  346554 cri.go:89] found id: ""
	I1002 07:21:37.654833  346554 logs.go:282] 0 containers: []
	W1002 07:21:37.654843  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:37.654853  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:37.654864  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:37.735865  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:37.735903  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:37.829667  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:37.829705  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:37.906371  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:37.897524    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.898687    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.899551    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901063    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901395    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:37.897524    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.898687    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.899551    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901063    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:37.901395    6573 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:37.906396  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:37.906409  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:37.931859  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:37.931891  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:37.982107  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:37.982141  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:38.026363  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:38.026402  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:38.097347  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:38.097387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:38.129911  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:38.129940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:38.174203  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:38.174233  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:38.192324  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:38.192356  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:40.723244  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:40.733967  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:40.734044  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:40.761160  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:40.761180  346554 cri.go:89] found id: ""
	I1002 07:21:40.761196  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:40.761257  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.764997  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:40.765082  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:40.793331  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:40.793357  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:40.793376  346554 cri.go:89] found id: ""
	I1002 07:21:40.793385  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:40.793441  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.799890  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.803764  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:40.803836  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:40.834660  346554 cri.go:89] found id: ""
	I1002 07:21:40.834686  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.834696  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:40.834702  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:40.834765  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:40.866063  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:40.866087  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:40.866093  346554 cri.go:89] found id: ""
	I1002 07:21:40.866103  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:40.866168  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.870407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.873946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:40.874058  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:40.908301  346554 cri.go:89] found id: ""
	I1002 07:21:40.908367  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.908391  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:40.908417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:40.908494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:40.937896  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:40.937966  346554 cri.go:89] found id: ""
	I1002 07:21:40.937990  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:40.938080  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:40.941880  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:40.941952  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:40.967147  346554 cri.go:89] found id: ""
	I1002 07:21:40.967174  346554 logs.go:282] 0 containers: []
	W1002 07:21:40.967190  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:40.967226  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:40.967238  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:41.061039  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:41.061077  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:41.080254  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:41.080282  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:41.108521  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:41.108547  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:41.162117  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:41.162154  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:41.233238  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:41.233276  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:41.260363  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:41.260392  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:41.333767  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:41.325094    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.325822    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.326721    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328411    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328796    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:41.325094    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.325822    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.326721    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328411    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:41.328796    6744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:41.333840  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:41.333863  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:41.370518  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:41.370556  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:41.399620  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:41.399646  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:41.485257  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:41.485299  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:44.031564  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:44.043423  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:44.043501  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:44.077366  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:44.077391  346554 cri.go:89] found id: ""
	I1002 07:21:44.077400  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:44.077473  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.082216  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:44.082297  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:44.114495  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:44.114564  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:44.114585  346554 cri.go:89] found id: ""
	I1002 07:21:44.114612  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:44.114701  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.118699  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.122876  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:44.122955  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:44.161976  346554 cri.go:89] found id: ""
	I1002 07:21:44.162003  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.162015  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:44.162021  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:44.162120  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:44.190658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:44.190682  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:44.190688  346554 cri.go:89] found id: ""
	I1002 07:21:44.190695  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:44.190800  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.194562  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.198424  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:44.198514  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:44.224096  346554 cri.go:89] found id: ""
	I1002 07:21:44.224158  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.224181  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:44.224207  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:44.224284  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:44.251545  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:44.251569  346554 cri.go:89] found id: ""
	I1002 07:21:44.251581  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:44.251639  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:44.255354  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:44.255428  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:44.282373  346554 cri.go:89] found id: ""
	I1002 07:21:44.282400  346554 logs.go:282] 0 containers: []
	W1002 07:21:44.282409  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:44.282419  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:44.282431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:44.308028  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:44.308062  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:44.363685  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:44.363723  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:44.396318  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:44.396349  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:44.442337  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:44.442370  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:44.546740  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:44.546778  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:44.562701  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:44.562734  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:44.638865  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:44.629817    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.630563    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632343    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632894    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.634422    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:44.629817    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.630563    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632343    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.632894    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:44.634422    6883 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:44.638901  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:44.638934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:44.675050  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:44.675117  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:44.759066  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:44.759108  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:44.789536  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:44.789569  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:47.372747  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:47.384470  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:47.384538  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:47.411456  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:47.411476  346554 cri.go:89] found id: ""
	I1002 07:21:47.411484  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:47.411538  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.415979  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:47.416052  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:47.441980  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:47.442000  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:47.442005  346554 cri.go:89] found id: ""
	I1002 07:21:47.442012  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:47.442071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.446178  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.449820  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:47.449889  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:47.480516  346554 cri.go:89] found id: ""
	I1002 07:21:47.480597  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.480614  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:47.480622  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:47.480700  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:47.512233  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:47.512299  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:47.512321  346554 cri.go:89] found id: ""
	I1002 07:21:47.512347  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:47.512447  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.517986  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.522484  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:47.522599  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:47.554391  346554 cri.go:89] found id: ""
	I1002 07:21:47.554459  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.554483  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:47.554509  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:47.554608  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:47.581519  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:47.581586  346554 cri.go:89] found id: ""
	I1002 07:21:47.581608  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:47.581710  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:47.585885  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:47.585999  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:47.615242  346554 cri.go:89] found id: ""
	I1002 07:21:47.615272  346554 logs.go:282] 0 containers: []
	W1002 07:21:47.615281  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:47.615291  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:47.615322  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:47.635364  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:47.635394  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:47.712651  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:47.703908    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.704731    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.705628    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.706326    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.707409    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:47.703908    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.704731    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.705628    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.706326    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:47.707409    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:47.712678  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:47.712694  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:47.743506  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:47.743536  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:47.811148  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:47.811227  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:47.870291  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:47.870324  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:47.910224  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:47.910257  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:47.939069  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:47.939155  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:47.964969  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:47.965008  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:48.043117  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:48.043158  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:48.088315  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:48.088344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:50.689757  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:50.700824  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:50.700893  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:50.728143  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:50.728166  346554 cri.go:89] found id: ""
	I1002 07:21:50.728175  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:50.728244  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.732333  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:50.732406  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:50.757855  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:50.757880  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:50.757886  346554 cri.go:89] found id: ""
	I1002 07:21:50.757905  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:50.757972  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.762029  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.765976  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:50.766050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:50.799256  346554 cri.go:89] found id: ""
	I1002 07:21:50.799278  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.799287  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:50.799293  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:50.799360  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:50.831950  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:50.831974  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:50.831981  346554 cri.go:89] found id: ""
	I1002 07:21:50.831988  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:50.832045  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.836319  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.840585  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:50.840668  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:50.870390  346554 cri.go:89] found id: ""
	I1002 07:21:50.870416  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.870428  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:50.870436  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:50.870502  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:50.900076  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:50.900103  346554 cri.go:89] found id: ""
	I1002 07:21:50.900112  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:50.900193  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:50.904363  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:50.904461  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:50.932728  346554 cri.go:89] found id: ""
	I1002 07:21:50.932755  346554 logs.go:282] 0 containers: []
	W1002 07:21:50.932775  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:50.932786  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:50.932798  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:51.001280  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:50.992878    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.993924    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.994793    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.995597    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.997141    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:50.992878    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.993924    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.994793    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.995597    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:50.997141    7115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:51.001310  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:51.001326  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:51.032692  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:51.032721  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:51.086523  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:51.086563  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:51.151924  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:51.151959  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:51.181936  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:51.181965  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:51.209313  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:51.209340  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:51.246072  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:51.246103  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:51.328956  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:51.328991  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:51.362658  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:51.362692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:51.461576  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:51.461615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:53.981504  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:53.992767  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:53.992841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:54.027324  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:54.027347  346554 cri.go:89] found id: ""
	I1002 07:21:54.027356  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:54.027422  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.031946  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:54.032021  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:54.059889  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:54.059911  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:54.059916  346554 cri.go:89] found id: ""
	I1002 07:21:54.059924  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:54.059983  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.064071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.068437  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:54.068516  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:54.100879  346554 cri.go:89] found id: ""
	I1002 07:21:54.100906  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.100917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:54.100923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:54.101019  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:54.127769  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:54.127792  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:54.127798  346554 cri.go:89] found id: ""
	I1002 07:21:54.127806  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:54.127871  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.131837  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.135428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:54.135507  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:54.163909  346554 cri.go:89] found id: ""
	I1002 07:21:54.163934  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.163943  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:54.163950  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:54.164008  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:54.195746  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:54.195778  346554 cri.go:89] found id: ""
	I1002 07:21:54.195787  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:54.195846  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:54.200638  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:54.200733  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:54.228414  346554 cri.go:89] found id: ""
	I1002 07:21:54.228492  346554 logs.go:282] 0 containers: []
	W1002 07:21:54.228518  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:54.228534  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:54.228548  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:54.261854  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:54.261884  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:54.337793  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:54.329984    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.330545    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332031    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332516    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.334074    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:54.329984    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.330545    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332031    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.332516    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:54.334074    7268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:54.337814  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:54.337828  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:54.374142  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:54.374176  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:54.444394  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:54.444430  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:54.487047  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:54.487074  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:54.531639  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:54.531667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:54.639157  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:54.639196  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:54.655755  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:54.655784  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:54.685950  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:54.685978  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:54.753837  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:54.753879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:57.341138  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:21:57.351729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:21:57.351806  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:21:57.383937  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:57.383962  346554 cri.go:89] found id: ""
	I1002 07:21:57.383970  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:21:57.384030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.387697  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:21:57.387774  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:21:57.413348  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:57.413372  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:57.413377  346554 cri.go:89] found id: ""
	I1002 07:21:57.413385  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:21:57.413451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.417397  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.420826  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:21:57.420904  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:21:57.453888  346554 cri.go:89] found id: ""
	I1002 07:21:57.453913  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.453922  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:21:57.453928  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:21:57.453986  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:21:57.483451  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:57.483472  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:21:57.483476  346554 cri.go:89] found id: ""
	I1002 07:21:57.483483  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:21:57.483541  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.487407  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.490932  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:21:57.491034  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:21:57.526291  346554 cri.go:89] found id: ""
	I1002 07:21:57.526318  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.526327  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:21:57.526334  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:21:57.526391  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:21:57.554217  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:57.554297  346554 cri.go:89] found id: ""
	I1002 07:21:57.554320  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:21:57.554415  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:21:57.558417  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:21:57.558494  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:21:57.590610  346554 cri.go:89] found id: ""
	I1002 07:21:57.590632  346554 logs.go:282] 0 containers: []
	W1002 07:21:57.590640  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:21:57.590649  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:21:57.590662  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:21:57.686336  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:21:57.686376  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:21:57.717511  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:21:57.717543  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:21:57.754283  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:21:57.754326  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:21:57.785227  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:21:57.785258  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:21:57.869305  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:21:57.869342  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:21:57.909139  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:21:57.909171  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:21:57.926456  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:21:57.926487  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:21:57.995639  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:21:57.987505    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.988090    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.989876    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.990282    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.991551    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:21:57.987505    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.988090    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.989876    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.990282    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:21:57.991551    7437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:21:57.995664  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:21:57.995679  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:21:58.058207  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:21:58.058248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:21:58.125241  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:21:58.125284  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:00.654876  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:00.665832  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:00.665905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:00.693874  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:00.693939  346554 cri.go:89] found id: ""
	I1002 07:22:00.693962  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:00.694054  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.697859  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:00.697934  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:00.725245  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:00.725270  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:00.725276  346554 cri.go:89] found id: ""
	I1002 07:22:00.725284  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:00.725364  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.729223  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.732817  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:00.732935  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:00.758839  346554 cri.go:89] found id: ""
	I1002 07:22:00.758906  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.758929  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:00.758953  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:00.759039  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:00.799071  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:00.799149  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:00.799155  346554 cri.go:89] found id: ""
	I1002 07:22:00.799162  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:00.799234  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.803167  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.806750  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:00.806845  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:00.839560  346554 cri.go:89] found id: ""
	I1002 07:22:00.839587  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.839596  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:00.839602  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:00.839660  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:00.870224  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:00.870255  346554 cri.go:89] found id: ""
	I1002 07:22:00.870263  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:00.870336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:00.874393  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:00.874495  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:00.912075  346554 cri.go:89] found id: ""
	I1002 07:22:00.912105  346554 logs.go:282] 0 containers: []
	W1002 07:22:00.912114  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:00.912124  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:00.912136  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:00.937824  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:00.937853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:00.995416  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:00.995451  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:01.066170  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:01.066205  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:01.097565  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:01.097596  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:01.177599  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:01.177641  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:01.279014  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:01.279051  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:01.294984  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:01.295013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:01.367956  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:01.359956    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.360472    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362061    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362543    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.364048    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:01.359956    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.360472    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362061    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.362543    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:01.364048    7570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:01.368020  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:01.368050  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:01.410820  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:01.410865  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:01.438796  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:01.438821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:03.971937  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:03.983881  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:03.983958  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:04.015026  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:04.015047  346554 cri.go:89] found id: ""
	I1002 07:22:04.015055  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:04.015146  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.019432  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:04.019511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:04.047606  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:04.047638  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:04.047644  346554 cri.go:89] found id: ""
	I1002 07:22:04.047651  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:04.047716  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.052312  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.055940  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:04.056013  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:04.084749  346554 cri.go:89] found id: ""
	I1002 07:22:04.084774  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.084784  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:04.084791  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:04.084858  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:04.115693  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:04.115718  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:04.115724  346554 cri.go:89] found id: ""
	I1002 07:22:04.115732  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:04.115791  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.119451  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.123387  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:04.123509  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:04.160601  346554 cri.go:89] found id: ""
	I1002 07:22:04.160634  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.160643  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:04.160650  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:04.160709  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:04.186914  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:04.186975  346554 cri.go:89] found id: ""
	I1002 07:22:04.187000  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:04.187074  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:04.190897  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:04.190972  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:04.217225  346554 cri.go:89] found id: ""
	I1002 07:22:04.217292  346554 logs.go:282] 0 containers: []
	W1002 07:22:04.217306  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:04.217320  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:04.217332  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:04.248848  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:04.248876  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:04.265771  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:04.265801  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:04.331344  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:04.323383    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.324116    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.325749    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.326044    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.327474    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:04.323383    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.324116    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.325749    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.326044    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:04.327474    7683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:04.331380  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:04.331395  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:04.358729  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:04.358757  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:04.416966  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:04.417007  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:04.455261  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:04.455298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:04.483009  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:04.483037  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:04.563547  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:04.563585  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:04.668263  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:04.668301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:04.744129  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:04.744172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:07.275239  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:07.285854  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:07.285925  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:07.312977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:07.312997  346554 cri.go:89] found id: ""
	I1002 07:22:07.313005  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:07.313060  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.316845  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:07.316920  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:07.346852  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:07.346874  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:07.346879  346554 cri.go:89] found id: ""
	I1002 07:22:07.346887  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:07.346943  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.350635  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.354162  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:07.354227  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:07.383691  346554 cri.go:89] found id: ""
	I1002 07:22:07.383716  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.383725  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:07.383732  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:07.383790  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:07.412740  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:07.412762  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:07.412768  346554 cri.go:89] found id: ""
	I1002 07:22:07.412775  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:07.412874  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.416633  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.420294  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:07.420370  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:07.448452  346554 cri.go:89] found id: ""
	I1002 07:22:07.448481  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.448496  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:07.448503  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:07.448573  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:07.478691  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:07.478759  346554 cri.go:89] found id: ""
	I1002 07:22:07.478782  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:07.478877  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:07.484491  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:07.484566  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:07.526882  346554 cri.go:89] found id: ""
	I1002 07:22:07.526907  346554 logs.go:282] 0 containers: []
	W1002 07:22:07.526916  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:07.526926  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:07.526940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:07.543682  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:07.543709  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:07.622365  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:07.613920    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.614676    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616380    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616942    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.618513    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:07.613920    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.614676    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616380    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.616942    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:07.618513    7807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:07.622386  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:07.622401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:07.688381  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:07.688417  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:07.716317  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:07.716368  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:07.765160  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:07.765187  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:07.863442  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:07.863480  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:07.890947  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:07.890975  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:07.931413  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:07.931445  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:07.994034  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:07.994116  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:08.029432  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:08.029459  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:10.612654  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:10.624226  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:10.624295  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:10.651797  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:10.651820  346554 cri.go:89] found id: ""
	I1002 07:22:10.651829  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:10.651887  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.655778  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:10.655861  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:10.682781  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:10.682804  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:10.682810  346554 cri.go:89] found id: ""
	I1002 07:22:10.682817  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:10.682873  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.686610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.690176  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:10.690248  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:10.716340  346554 cri.go:89] found id: ""
	I1002 07:22:10.716365  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.716374  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:10.716380  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:10.716450  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:10.744916  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:10.744941  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:10.744947  346554 cri.go:89] found id: ""
	I1002 07:22:10.744954  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:10.745009  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.748825  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.752367  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:10.752459  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:10.778426  346554 cri.go:89] found id: ""
	I1002 07:22:10.778491  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.778519  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:10.778545  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:10.778634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:10.816930  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:10.816956  346554 cri.go:89] found id: ""
	I1002 07:22:10.816965  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:10.817021  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:10.820675  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:10.820748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:10.848624  346554 cri.go:89] found id: ""
	I1002 07:22:10.848692  346554 logs.go:282] 0 containers: []
	W1002 07:22:10.848716  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:10.848747  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:10.848784  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:10.949146  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:10.949183  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:10.966424  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:10.966503  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:11.050571  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:11.041861    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.042811    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044425    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044785    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.047001    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:11.041861    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.042811    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044425    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.044785    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:11.047001    7947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:11.050590  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:11.050607  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:11.096274  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:11.096305  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:11.163795  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:11.163833  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:11.198136  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:11.198167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:11.281776  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:11.281815  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:11.314298  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:11.314329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:11.346046  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:11.346074  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:11.401509  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:11.401546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:13.937437  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:13.948853  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:13.948931  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:13.978524  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:13.978546  346554 cri.go:89] found id: ""
	I1002 07:22:13.978562  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:13.978622  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:13.983904  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:13.984002  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:14.018404  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:14.018427  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:14.018432  346554 cri.go:89] found id: ""
	I1002 07:22:14.018441  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:14.018501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.022898  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.027485  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:14.027580  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:14.067189  346554 cri.go:89] found id: ""
	I1002 07:22:14.067277  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.067293  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:14.067301  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:14.067380  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:14.098843  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:14.098868  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:14.098874  346554 cri.go:89] found id: ""
	I1002 07:22:14.098882  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:14.098938  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.103497  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.107744  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:14.107820  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:14.136768  346554 cri.go:89] found id: ""
	I1002 07:22:14.136797  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.136807  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:14.136813  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:14.136880  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:14.163984  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:14.164055  346554 cri.go:89] found id: ""
	I1002 07:22:14.164079  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:14.164165  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:14.168259  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:14.168337  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:14.201762  346554 cri.go:89] found id: ""
	I1002 07:22:14.201789  346554 logs.go:282] 0 containers: []
	W1002 07:22:14.201799  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:14.201809  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:14.201822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:14.228036  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:14.228067  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:14.305247  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:14.305286  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:14.417180  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:14.417216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:14.434371  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:14.434404  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:14.494496  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:14.494534  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:14.530240  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:14.530274  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:14.565285  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:14.565312  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:14.656059  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:14.648012    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.648398    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.649913    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.650225    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.651841    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:14.648012    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.648398    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.649913    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.650225    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:14.651841    8113 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:14.656082  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:14.656096  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:14.684431  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:14.684465  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:14.720953  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:14.720987  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:17.291251  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:17.303244  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:17.303315  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:17.330183  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:17.330208  346554 cri.go:89] found id: ""
	I1002 07:22:17.330217  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:17.330281  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.334207  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:17.334281  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:17.363238  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:17.363263  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:17.363269  346554 cri.go:89] found id: ""
	I1002 07:22:17.363276  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:17.363331  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.367005  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.370719  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:17.370792  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:17.397991  346554 cri.go:89] found id: ""
	I1002 07:22:17.398016  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.398026  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:17.398032  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:17.398092  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:17.431537  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:17.431562  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:17.431568  346554 cri.go:89] found id: ""
	I1002 07:22:17.431575  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:17.431631  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.435774  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.439628  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:17.439701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:17.470573  346554 cri.go:89] found id: ""
	I1002 07:22:17.470598  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.470614  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:17.470621  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:17.470689  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:17.496787  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:17.496813  346554 cri.go:89] found id: ""
	I1002 07:22:17.496822  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:17.496879  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:17.500676  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:17.500809  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:17.528111  346554 cri.go:89] found id: ""
	I1002 07:22:17.528136  346554 logs.go:282] 0 containers: []
	W1002 07:22:17.528145  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:17.528155  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:17.528167  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:17.629228  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:17.629269  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:17.719781  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:17.711134    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.712057    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713690    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713991    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.715616    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:17.711134    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.712057    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713690    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.713991    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:17.715616    8208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:17.719804  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:17.719818  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:17.791077  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:17.791176  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:17.835873  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:17.835907  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:17.865669  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:17.865698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:17.947809  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:17.947851  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:17.966021  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:17.966054  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:17.993388  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:17.993419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:18.067826  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:18.067915  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:18.098854  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:18.098928  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:20.640412  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:20.654177  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:20.654280  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:20.689110  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:20.689138  346554 cri.go:89] found id: ""
	I1002 07:22:20.689146  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:20.689210  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.692968  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:20.693043  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:20.726246  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:20.726271  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:20.726276  346554 cri.go:89] found id: ""
	I1002 07:22:20.726284  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:20.726340  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.730329  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.734406  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:20.734503  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:20.762306  346554 cri.go:89] found id: ""
	I1002 07:22:20.762332  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.762341  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:20.762348  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:20.762406  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:20.801345  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:20.801370  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:20.801375  346554 cri.go:89] found id: ""
	I1002 07:22:20.801383  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:20.801461  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.805572  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.809363  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:20.809439  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:20.839370  346554 cri.go:89] found id: ""
	I1002 07:22:20.839396  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.839405  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:20.839411  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:20.839487  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:20.866883  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:20.866908  346554 cri.go:89] found id: ""
	I1002 07:22:20.866918  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:20.866994  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:20.871482  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:20.871602  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:20.915272  346554 cri.go:89] found id: ""
	I1002 07:22:20.915297  346554 logs.go:282] 0 containers: []
	W1002 07:22:20.915306  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:20.915334  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:20.915354  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:20.969984  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:20.970023  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:21.008389  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:21.008426  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:21.097527  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:21.097564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:21.131052  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:21.131112  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:21.250056  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:21.250095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:21.266497  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:21.266528  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:21.336488  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:21.328099    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.328680    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330526    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330860    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.332595    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:21.328099    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.328680    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330526    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.330860    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:21.332595    8375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:21.336517  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:21.336534  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:21.365447  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:21.365477  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:21.432439  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:21.432517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:21.464158  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:21.464186  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:23.993684  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:24.012128  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:24.012344  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:24.041820  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:24.041844  346554 cri.go:89] found id: ""
	I1002 07:22:24.041853  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:24.041913  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.045939  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:24.046012  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:24.080951  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:24.080971  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:24.080977  346554 cri.go:89] found id: ""
	I1002 07:22:24.080984  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:24.081042  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.086379  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.090878  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:24.090956  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:24.118754  346554 cri.go:89] found id: ""
	I1002 07:22:24.118793  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.118803  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:24.118809  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:24.118876  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:24.162937  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:24.162960  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:24.162967  346554 cri.go:89] found id: ""
	I1002 07:22:24.162975  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:24.163041  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.167416  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.171521  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:24.171612  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:24.198740  346554 cri.go:89] found id: ""
	I1002 07:22:24.198764  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.198774  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:24.198780  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:24.198849  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:24.226586  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:24.226607  346554 cri.go:89] found id: ""
	I1002 07:22:24.226616  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:24.226676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:24.230625  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:24.230701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:24.258053  346554 cri.go:89] found id: ""
	I1002 07:22:24.258089  346554 logs.go:282] 0 containers: []
	W1002 07:22:24.258100  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:24.258110  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:24.258122  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:24.357393  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:24.357431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:24.375359  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:24.375390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:24.444675  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:24.444714  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:24.484227  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:24.484262  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:24.512674  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:24.512707  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:24.597691  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:24.589362    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.589905    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.591682    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.592352    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.593874    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:24.589362    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.589905    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.591682    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.592352    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:24.593874    8505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:24.597712  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:24.597728  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:24.628466  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:24.628492  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:24.706367  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:24.706408  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:24.737446  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:24.737475  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:24.822997  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:24.823036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:27.355482  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:27.366566  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:27.366636  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:27.394804  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:27.394828  346554 cri.go:89] found id: ""
	I1002 07:22:27.394837  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:27.394901  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.398931  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:27.399000  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:27.425553  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:27.425576  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:27.425582  346554 cri.go:89] found id: ""
	I1002 07:22:27.425590  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:27.425651  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.429400  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.433140  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:27.433237  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:27.463605  346554 cri.go:89] found id: ""
	I1002 07:22:27.463626  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.463635  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:27.463642  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:27.463701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:27.493043  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:27.493074  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:27.493080  346554 cri.go:89] found id: ""
	I1002 07:22:27.493087  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:27.493145  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.497072  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.500729  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:27.500805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:27.531993  346554 cri.go:89] found id: ""
	I1002 07:22:27.532021  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.532031  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:27.532037  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:27.532097  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:27.559232  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:27.559310  346554 cri.go:89] found id: ""
	I1002 07:22:27.559329  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:27.559400  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:27.563624  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:27.563744  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:27.593254  346554 cri.go:89] found id: ""
	I1002 07:22:27.593281  346554 logs.go:282] 0 containers: []
	W1002 07:22:27.593302  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:27.593313  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:27.593328  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:27.622961  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:27.622992  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:27.700292  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:27.690392    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.691740    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.692828    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694000    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694658    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:27.690392    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.691740    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.692828    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694000    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:27.694658    8617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:27.700315  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:27.700329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:27.760790  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:27.760830  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:27.800937  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:27.800976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:27.879230  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:27.879273  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:27.910457  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:27.910561  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:27.998247  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:27.998287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:28.039823  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:28.039856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:28.148384  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:28.148472  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:28.170086  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:28.170114  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:30.702644  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:30.713672  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:30.713748  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:30.742461  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:30.742484  346554 cri.go:89] found id: ""
	I1002 07:22:30.742493  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:30.742553  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.746359  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:30.746446  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:30.777229  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:30.777256  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:30.777261  346554 cri.go:89] found id: ""
	I1002 07:22:30.777269  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:30.777345  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.781661  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.785300  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:30.785373  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:30.812435  346554 cri.go:89] found id: ""
	I1002 07:22:30.812465  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.812474  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:30.812481  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:30.812558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:30.839730  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:30.839752  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:30.839758  346554 cri.go:89] found id: ""
	I1002 07:22:30.839765  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:30.839851  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.843582  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.847332  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:30.847414  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:30.877768  346554 cri.go:89] found id: ""
	I1002 07:22:30.877795  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.877804  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:30.877811  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:30.877919  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:30.906930  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:30.906954  346554 cri.go:89] found id: ""
	I1002 07:22:30.906970  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:30.907050  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:30.911004  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:30.911153  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:30.936781  346554 cri.go:89] found id: ""
	I1002 07:22:30.936817  346554 logs.go:282] 0 containers: []
	W1002 07:22:30.936826  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:30.936836  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:30.936849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:30.963944  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:30.963978  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:31.039393  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:31.039431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:31.056356  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:31.056396  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:31.086443  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:31.086483  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:31.129305  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:31.129342  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:31.206518  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:31.206557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:31.246963  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:31.246992  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:31.349345  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:31.349380  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:31.424210  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:31.415481    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.416258    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.417862    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.418419    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.420138    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:31.415481    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.416258    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.417862    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.418419    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:31.420138    8797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:31.424235  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:31.424247  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:31.494342  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:31.494381  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.028701  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:34.039883  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:34.039955  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:34.082124  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:34.082149  346554 cri.go:89] found id: ""
	I1002 07:22:34.082158  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:34.082222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.086333  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:34.086408  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:34.115537  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:34.115562  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:34.115568  346554 cri.go:89] found id: ""
	I1002 07:22:34.115575  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:34.115632  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.119540  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.123109  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:34.123181  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:34.149943  346554 cri.go:89] found id: ""
	I1002 07:22:34.149969  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.149978  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:34.149985  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:34.150098  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:34.177023  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:34.177044  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.177051  346554 cri.go:89] found id: ""
	I1002 07:22:34.177060  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:34.177117  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.180893  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.184341  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:34.184418  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:34.211353  346554 cri.go:89] found id: ""
	I1002 07:22:34.211377  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.211385  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:34.211391  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:34.211449  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:34.237574  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:34.237593  346554 cri.go:89] found id: ""
	I1002 07:22:34.237601  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:34.237659  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:34.241551  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:34.241626  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:34.272007  346554 cri.go:89] found id: ""
	I1002 07:22:34.272030  346554 logs.go:282] 0 containers: []
	W1002 07:22:34.272039  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:34.272048  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:34.272059  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:34.344503  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:34.344540  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:34.378151  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:34.378181  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:34.479542  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:34.479579  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:34.561912  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:34.553376    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.554044    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.555646    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.556517    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.558373    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:34.553376    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.554044    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.555646    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.556517    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:34.558373    8900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:34.561988  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:34.562009  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:34.627010  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:34.627046  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:34.675398  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:34.675431  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:34.761258  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:34.761301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:34.783800  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:34.783847  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:34.822817  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:34.822856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:34.855272  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:34.855298  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:37.390316  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:37.401208  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:37.401285  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:37.428835  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:37.428857  346554 cri.go:89] found id: ""
	I1002 07:22:37.428864  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:37.428934  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.433201  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:37.433276  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:37.461633  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:37.461664  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:37.461670  346554 cri.go:89] found id: ""
	I1002 07:22:37.461678  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:37.461736  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.465629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.469272  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:37.469348  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:37.498524  346554 cri.go:89] found id: ""
	I1002 07:22:37.498551  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.498561  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:37.498567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:37.498627  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:37.535431  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:37.535453  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:37.535458  346554 cri.go:89] found id: ""
	I1002 07:22:37.535465  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:37.535523  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.539518  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.543351  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:37.543429  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:37.569817  346554 cri.go:89] found id: ""
	I1002 07:22:37.569886  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.569912  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:37.569938  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:37.570048  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:37.600094  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:37.600161  346554 cri.go:89] found id: ""
	I1002 07:22:37.600184  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:37.600279  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:37.604474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:37.604627  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:37.635043  346554 cri.go:89] found id: ""
	I1002 07:22:37.635139  346554 logs.go:282] 0 containers: []
	W1002 07:22:37.635164  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:37.635209  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:37.635241  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:37.652712  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:37.652747  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:37.724304  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:37.715214    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.715952    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.717909    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.718653    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.720486    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:37.715214    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.715952    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.717909    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.718653    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:37.720486    9025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:37.724327  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:37.724343  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:37.778979  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:37.779018  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:37.823368  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:37.823400  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:37.852458  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:37.852487  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:37.935415  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:37.935451  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:38.032660  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:38.032698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:38.062211  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:38.062292  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:38.141041  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:38.141076  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:38.167504  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:38.167535  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:40.716529  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:40.727155  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:40.727237  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:40.759650  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:40.759670  346554 cri.go:89] found id: ""
	I1002 07:22:40.759677  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:40.759739  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.763794  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:40.763891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:40.799428  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:40.799495  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:40.799505  346554 cri.go:89] found id: ""
	I1002 07:22:40.799513  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:40.799587  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.804441  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.808181  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:40.808256  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:40.839434  346554 cri.go:89] found id: ""
	I1002 07:22:40.839458  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.839466  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:40.839479  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:40.839540  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:40.866347  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:40.866368  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:40.866373  346554 cri.go:89] found id: ""
	I1002 07:22:40.866380  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:40.866435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.870243  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.873802  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:40.873887  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:40.915472  346554 cri.go:89] found id: ""
	I1002 07:22:40.915499  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.915508  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:40.915515  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:40.915589  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:40.945530  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:40.945552  346554 cri.go:89] found id: ""
	I1002 07:22:40.945570  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:40.945629  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:40.949410  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:40.949513  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:40.976546  346554 cri.go:89] found id: ""
	I1002 07:22:40.976589  346554 logs.go:282] 0 containers: []
	W1002 07:22:40.976598  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:40.976608  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:40.976620  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:40.993923  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:40.993952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:41.069718  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:41.061732    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.062193    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.063798    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.064141    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.065342    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:41.061732    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.062193    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.063798    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.064141    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:41.065342    9162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:41.069746  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:41.069760  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:41.101275  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:41.101313  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:41.185486  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:41.185522  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:41.213391  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:41.213419  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:41.286933  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:41.286973  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:41.325032  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:41.325063  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:41.427475  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:41.427517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:41.507722  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:41.507762  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:41.553697  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:41.553731  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:44.083713  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:44.094946  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:44.095050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:44.122939  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:44.122961  346554 cri.go:89] found id: ""
	I1002 07:22:44.122970  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:44.123027  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.126926  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:44.127001  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:44.168228  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:44.168253  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:44.168259  346554 cri.go:89] found id: ""
	I1002 07:22:44.168267  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:44.168325  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.172203  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.176051  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:44.176154  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:44.207518  346554 cri.go:89] found id: ""
	I1002 07:22:44.207545  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.207554  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:44.207560  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:44.207619  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:44.236177  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:44.236200  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:44.236206  346554 cri.go:89] found id: ""
	I1002 07:22:44.236214  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:44.236274  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.239868  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.243456  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:44.243575  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:44.269491  346554 cri.go:89] found id: ""
	I1002 07:22:44.269568  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.269596  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:44.269612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:44.269687  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:44.295403  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:44.295423  346554 cri.go:89] found id: ""
	I1002 07:22:44.295431  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:44.295490  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:44.299440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:44.299555  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:44.333034  346554 cri.go:89] found id: ""
	I1002 07:22:44.333110  346554 logs.go:282] 0 containers: []
	W1002 07:22:44.333136  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:44.333175  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:44.333210  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:44.364108  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:44.364139  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:44.433101  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:44.424314    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.424960    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.426515    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.427164    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.428946    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:44.424314    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.424960    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.426515    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.427164    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:44.428946    9305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:44.433123  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:44.433137  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:44.489676  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:44.489711  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:44.535780  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:44.535819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:44.563832  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:44.563862  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:44.644267  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:44.644308  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:44.678038  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:44.678077  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:44.779429  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:44.779467  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:44.802305  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:44.802335  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:44.828371  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:44.828400  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.412789  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:47.423373  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:47.423464  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:47.451136  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:47.451162  346554 cri.go:89] found id: ""
	I1002 07:22:47.451171  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:47.451237  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.455412  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:47.455531  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:47.487387  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:47.487418  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:47.487424  346554 cri.go:89] found id: ""
	I1002 07:22:47.487432  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:47.487491  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.491360  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.495265  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:47.495336  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:47.534120  346554 cri.go:89] found id: ""
	I1002 07:22:47.534144  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.534153  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:47.534159  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:47.534223  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:47.567581  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.567604  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:47.567610  346554 cri.go:89] found id: ""
	I1002 07:22:47.567618  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:47.567676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.571558  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.575428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:47.575500  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:47.604017  346554 cri.go:89] found id: ""
	I1002 07:22:47.604041  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.604050  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:47.604057  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:47.604178  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:47.631246  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:47.631266  346554 cri.go:89] found id: ""
	I1002 07:22:47.631275  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:47.631336  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:47.635224  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:47.635329  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:47.662879  346554 cri.go:89] found id: ""
	I1002 07:22:47.662906  346554 logs.go:282] 0 containers: []
	W1002 07:22:47.662916  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:47.662925  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:47.662969  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:47.758850  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:47.758889  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:47.787003  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:47.787035  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:47.865561  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:47.865598  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:47.894009  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:47.894083  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:47.911472  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:47.911547  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:47.992995  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:47.978023    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.979713    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986171    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986781    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.988190    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:47.978023    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.979713    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986171    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.986781    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:47.988190    9464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:47.993061  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:47.993095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:48.054795  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:48.054833  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:48.105647  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:48.105681  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:48.136822  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:48.136852  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:48.221826  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:48.221868  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:50.759146  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:50.770232  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:50.770304  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:50.808978  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:50.808999  346554 cri.go:89] found id: ""
	I1002 07:22:50.809014  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:50.809071  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.812891  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:50.812973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:50.844548  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:50.844621  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:50.844634  346554 cri.go:89] found id: ""
	I1002 07:22:50.844643  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:50.844704  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.848854  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.853318  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:50.853395  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:50.879864  346554 cri.go:89] found id: ""
	I1002 07:22:50.879885  346554 logs.go:282] 0 containers: []
	W1002 07:22:50.879894  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:50.879901  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:50.879978  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:50.913482  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:50.913502  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:50.913506  346554 cri.go:89] found id: ""
	I1002 07:22:50.913514  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:50.913571  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.917411  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.920913  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:50.920995  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:50.953742  346554 cri.go:89] found id: ""
	I1002 07:22:50.953769  346554 logs.go:282] 0 containers: []
	W1002 07:22:50.953778  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:50.953785  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:50.953849  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:50.982216  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:50.982239  346554 cri.go:89] found id: ""
	I1002 07:22:50.982247  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:50.982312  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:50.985960  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:50.986036  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:51.023369  346554 cri.go:89] found id: ""
	I1002 07:22:51.023407  346554 logs.go:282] 0 containers: []
	W1002 07:22:51.023416  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:51.023425  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:51.023437  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:51.124423  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:51.124471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:51.162362  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:51.162466  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:51.193077  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:51.193120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:51.209317  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:51.209348  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:51.286706  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:51.277838    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.278649    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280280    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280639    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.282163    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:51.277838    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.278649    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280280    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.280639    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:51.282163    9596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:51.286736  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:51.286768  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:51.314928  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:51.315005  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:51.375178  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:51.375216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:51.450324  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:51.450368  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:51.478495  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:51.478526  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:51.563131  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:51.563178  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:54.112345  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:54.123567  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:54.123643  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:54.154215  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:54.154239  346554 cri.go:89] found id: ""
	I1002 07:22:54.154247  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:54.154306  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.158242  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:54.158319  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:54.192307  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:54.192332  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:54.192343  346554 cri.go:89] found id: ""
	I1002 07:22:54.192351  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:54.192419  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.197194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.201582  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:54.201705  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:54.228380  346554 cri.go:89] found id: ""
	I1002 07:22:54.228415  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.228425  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:54.228432  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:54.228525  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:54.256056  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:54.256080  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:54.256087  346554 cri.go:89] found id: ""
	I1002 07:22:54.256094  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:54.256155  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.260143  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.263934  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:54.264008  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:54.290214  346554 cri.go:89] found id: ""
	I1002 07:22:54.290241  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.290251  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:54.290256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:54.290314  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:54.319063  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:54.319117  346554 cri.go:89] found id: ""
	I1002 07:22:54.319126  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:54.319184  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:54.323448  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:54.323547  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:54.354341  346554 cri.go:89] found id: ""
	I1002 07:22:54.354366  346554 logs.go:282] 0 containers: []
	W1002 07:22:54.354374  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:54.354384  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:54.354396  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:54.409595  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:54.409633  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:54.449908  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:54.449944  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:54.532130  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:54.532170  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:54.559794  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:54.559822  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:54.593620  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:54.593651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:54.700915  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:54.700951  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:54.727426  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:54.727452  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:54.756226  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:54.756263  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:54.841269  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:54.841312  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:54.859387  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:54.859425  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:54.940701  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:54.932413    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.933246    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.934849    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.935238    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.936807    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:54.932413    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.933246    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.934849    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.935238    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:54.936807    9779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
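Each retry cycle above follows the same pattern: enumerate control-plane containers by name with crictl, tail the last 400 lines of each one found, then collect kubelet, CRI-O and dmesg output. A rough manual equivalent one could run on the node is sketched below (an approximation of the commands shown in this log, not the tool's exact code path):

	    # sketch: replicate the per-component log sweep seen in each cycle
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      for id in $(sudo crictl ps -a --quiet --name="$c"); do
	        echo "==== $c $id ===="
	        sudo crictl logs --tail 400 "$id"
	      done
	    done
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400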
	I1002 07:22:57.441672  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:22:57.453569  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:22:57.453639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:22:57.483699  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:57.483722  346554 cri.go:89] found id: ""
	I1002 07:22:57.483746  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:22:57.483845  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.487681  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:22:57.487775  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:22:57.518495  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:57.518520  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:22:57.518526  346554 cri.go:89] found id: ""
	I1002 07:22:57.518534  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:22:57.518593  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.522615  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.526448  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:22:57.526523  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:22:57.553219  346554 cri.go:89] found id: ""
	I1002 07:22:57.553246  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.553255  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:22:57.553263  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:22:57.553327  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:22:57.582109  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:57.582132  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:57.582137  346554 cri.go:89] found id: ""
	I1002 07:22:57.582146  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:22:57.582209  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.586222  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.590675  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:22:57.590752  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:22:57.621475  346554 cri.go:89] found id: ""
	I1002 07:22:57.621544  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.621567  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:22:57.621592  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:22:57.621680  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:22:57.647238  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:57.647304  346554 cri.go:89] found id: ""
	I1002 07:22:57.647329  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:22:57.647425  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:22:57.651299  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:22:57.651391  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:22:57.681221  346554 cri.go:89] found id: ""
	I1002 07:22:57.681298  346554 logs.go:282] 0 containers: []
	W1002 07:22:57.681324  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:22:57.681350  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:22:57.681387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:22:57.757042  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:22:57.757079  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:22:57.789483  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:22:57.789519  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:22:57.876258  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:22:57.876301  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:22:57.909957  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:22:57.909986  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:22:57.994768  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:22:57.985195    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.985977    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.987651    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.988458    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.990380    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:22:57.985195    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.985977    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.987651    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.988458    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:22:57.990380    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:22:57.994790  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:22:57.994804  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:22:58.057805  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:22:58.057845  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:22:58.093196  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:22:58.093227  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:22:58.192017  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:22:58.192055  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:22:58.209558  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:22:58.209587  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:22:58.236404  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:22:58.236433  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:00.781745  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:00.796477  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:00.796552  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:00.823241  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:00.823265  346554 cri.go:89] found id: ""
	I1002 07:23:00.823273  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:00.823327  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.827586  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:00.827675  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:00.862251  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:00.862274  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:00.862280  346554 cri.go:89] found id: ""
	I1002 07:23:00.862287  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:00.862348  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.866453  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.870120  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:00.870189  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:00.910250  346554 cri.go:89] found id: ""
	I1002 07:23:00.910318  346554 logs.go:282] 0 containers: []
	W1002 07:23:00.910341  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:00.910366  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:00.910451  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:00.939142  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:00.939208  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:00.939234  346554 cri.go:89] found id: ""
	I1002 07:23:00.939243  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:00.939300  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.943281  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:00.947110  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:00.947180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:00.979402  346554 cri.go:89] found id: ""
	I1002 07:23:00.979431  346554 logs.go:282] 0 containers: []
	W1002 07:23:00.979444  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:00.979452  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:00.979518  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:01.016038  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:01.016103  346554 cri.go:89] found id: ""
	I1002 07:23:01.016131  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:01.016225  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:01.020366  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:01.020520  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:01.049712  346554 cri.go:89] found id: ""
	I1002 07:23:01.049780  346554 logs.go:282] 0 containers: []
	W1002 07:23:01.049803  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:01.049831  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:01.049870  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:01.101253  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:01.101287  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:01.200014  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:01.200053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:01.277860  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:01.264774    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.266699    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.271332    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.272085    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.273912    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:01.264774    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.266699    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.271332    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.272085    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:01.273912    9994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:01.277885  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:01.277898  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:01.341507  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:01.341545  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:01.413278  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:01.413313  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:01.446875  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:01.446914  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:01.475436  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:01.475464  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:01.551813  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:01.551853  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:01.585150  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:01.585187  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:01.601574  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:01.601606  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:04.131042  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:04.142520  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:04.142634  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:04.176669  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:04.176692  346554 cri.go:89] found id: ""
	I1002 07:23:04.176701  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:04.176763  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.180972  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:04.181051  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:04.208821  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:04.208846  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:04.208851  346554 cri.go:89] found id: ""
	I1002 07:23:04.208859  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:04.208925  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.213191  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.217006  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:04.217129  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:04.245751  346554 cri.go:89] found id: ""
	I1002 07:23:04.245775  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.245790  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:04.245798  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:04.245859  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:04.284664  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:04.284685  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:04.284689  346554 cri.go:89] found id: ""
	I1002 07:23:04.284697  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:04.284756  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.288986  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.292617  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:04.292700  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:04.320145  346554 cri.go:89] found id: ""
	I1002 07:23:04.320171  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.320180  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:04.320187  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:04.320245  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:04.347600  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:04.347622  346554 cri.go:89] found id: ""
	I1002 07:23:04.347631  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:04.347686  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:04.351440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:04.351511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:04.383653  346554 cri.go:89] found id: ""
	I1002 07:23:04.383732  346554 logs.go:282] 0 containers: []
	W1002 07:23:04.383749  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:04.383759  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:04.383775  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:04.440177  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:04.440218  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:04.468956  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:04.469027  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:04.545741  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:04.545780  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:04.579865  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:04.579895  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:04.681656  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:04.681695  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:04.752352  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:04.744202   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.744834   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746456   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746996   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.748061   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:04.744202   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.744834   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746456   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.746996   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:04.748061   10154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
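	Each "describe nodes" pass in this loop fails identically: kubectl on the node cannot reach the apiserver at localhost:8443 (connection refused), so only the container, journald and dmesg sources are actually collected. A minimal sketch of reproducing the same probe by hand, assuming shell access to the affected node (for example via minikube ssh with the profile under test; the profile name is not shown in this excerpt):
	
	    # is an apiserver process running at all?
	    sudo pgrep -xnf kube-apiserver.*minikube.*
	    # list apiserver containers in any state (same crictl call the log shows)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # repeat the failing step directly; with the apiserver down this returns
	    # "connection refused" exactly as captured above
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    # optional direct health probe (not part of the log above)
	    curl -ks https://localhost:8443/healthz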
	I1002 07:23:04.752373  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:04.752387  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:04.793420  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:04.793493  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:04.864258  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:04.864293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:04.893921  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:04.894006  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:04.911663  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:04.911693  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.444239  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:07.455140  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:07.455218  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:07.484101  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.484124  346554 cri.go:89] found id: ""
	I1002 07:23:07.484133  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:07.484189  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.488067  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:07.488145  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:07.522958  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:07.523021  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:07.523044  346554 cri.go:89] found id: ""
	I1002 07:23:07.523071  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:07.523194  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.527249  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.531022  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:07.531124  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:07.557498  346554 cri.go:89] found id: ""
	I1002 07:23:07.557519  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.557528  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:07.557535  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:07.557609  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:07.584061  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:07.584092  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:07.584096  346554 cri.go:89] found id: ""
	I1002 07:23:07.584105  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:07.584170  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.587957  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.591564  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:07.591639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:07.619944  346554 cri.go:89] found id: ""
	I1002 07:23:07.619971  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.619980  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:07.619987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:07.620050  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:07.648834  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:07.648855  346554 cri.go:89] found id: ""
	I1002 07:23:07.648863  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:07.648919  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:07.652819  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:07.652937  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:07.682396  346554 cri.go:89] found id: ""
	I1002 07:23:07.682421  346554 logs.go:282] 0 containers: []
	W1002 07:23:07.682430  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:07.682439  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:07.682452  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:07.751625  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:07.743061   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.744026   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.745740   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.746058   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.747713   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:07.743061   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.744026   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.745740   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.746058   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:07.747713   10259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:07.751650  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:07.751667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:07.778524  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:07.778551  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:07.850872  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:07.850910  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:07.887246  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:07.887283  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:07.959701  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:07.959738  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:07.989632  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:07.989661  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:08.009848  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:08.009885  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:08.041024  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:08.041052  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:08.120762  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:08.120798  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:08.174204  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:08.174234  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:10.791227  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:10.804748  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:10.804834  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:10.833209  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:10.833256  346554 cri.go:89] found id: ""
	I1002 07:23:10.833264  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:10.833327  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.837233  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:10.837307  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:10.867407  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:10.867431  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:10.867436  346554 cri.go:89] found id: ""
	I1002 07:23:10.867444  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:10.867501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.871289  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.874962  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:10.875041  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:10.909346  346554 cri.go:89] found id: ""
	I1002 07:23:10.909372  346554 logs.go:282] 0 containers: []
	W1002 07:23:10.909381  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:10.909388  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:10.909444  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:10.944052  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:10.944127  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:10.944152  346554 cri.go:89] found id: ""
	I1002 07:23:10.944181  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:10.944285  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.952530  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:10.957003  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:10.957085  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:10.984253  346554 cri.go:89] found id: ""
	I1002 07:23:10.984287  346554 logs.go:282] 0 containers: []
	W1002 07:23:10.984297  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:10.984321  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:10.984401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:11.018350  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:11.018417  346554 cri.go:89] found id: ""
	I1002 07:23:11.018442  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:11.018520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:11.022612  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:11.022707  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:11.054294  346554 cri.go:89] found id: ""
	I1002 07:23:11.054371  346554 logs.go:282] 0 containers: []
	W1002 07:23:11.054394  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:11.054437  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:11.054471  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:11.132821  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:11.124867   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.125650   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.126895   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.127432   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.129002   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:11.124867   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.125650   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.126895   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.127432   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:11.129002   10396 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:11.132846  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:11.132859  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:11.161373  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:11.161401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:11.219899  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:11.219936  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:11.250524  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:11.250554  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:11.282533  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:11.282564  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:11.385870  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:11.385909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:11.402968  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:11.402997  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:11.447948  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:11.447983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:11.521218  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:11.521256  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:11.551246  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:11.551320  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:14.129146  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:14.140212  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:14.140315  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:14.167561  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:14.167585  346554 cri.go:89] found id: ""
	I1002 07:23:14.167593  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:14.167691  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.171728  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:14.171841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:14.198571  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:14.198594  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:14.198600  346554 cri.go:89] found id: ""
	I1002 07:23:14.198607  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:14.198693  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.202658  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.207962  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:14.208057  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:14.233944  346554 cri.go:89] found id: ""
	I1002 07:23:14.233970  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.233979  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:14.233986  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:14.234064  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:14.264854  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:14.264878  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:14.264884  346554 cri.go:89] found id: ""
	I1002 07:23:14.264892  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:14.264948  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.268797  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.272677  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:14.272756  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:14.304992  346554 cri.go:89] found id: ""
	I1002 07:23:14.305031  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.305041  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:14.305047  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:14.305120  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:14.335500  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:14.335570  346554 cri.go:89] found id: ""
	I1002 07:23:14.335593  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:14.335684  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:14.339428  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:14.339502  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:14.366928  346554 cri.go:89] found id: ""
	I1002 07:23:14.366954  346554 logs.go:282] 0 containers: []
	W1002 07:23:14.366964  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:14.366973  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:14.366984  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:14.441765  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:14.441808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:14.473510  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:14.473541  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:14.552162  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:14.552201  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:14.586130  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:14.586160  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:14.602135  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:14.602164  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:14.638523  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:14.638557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:14.717772  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:14.717808  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:14.748211  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:14.748283  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:14.848964  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:14.849003  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:14.926254  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:14.916550   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.917229   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.918910   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.919742   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.921374   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:14.916550   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.917229   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.918910   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.919742   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:14.921374   10589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:14.926277  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:14.926290  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:17.456912  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:17.467889  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:17.467979  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:17.495434  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:17.495457  346554 cri.go:89] found id: ""
	I1002 07:23:17.495466  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:17.495524  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.499591  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:17.499663  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:17.535737  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:17.535757  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:17.535761  346554 cri.go:89] found id: ""
	I1002 07:23:17.535768  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:17.535826  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.540069  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.543817  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:17.543891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:17.573877  346554 cri.go:89] found id: ""
	I1002 07:23:17.573907  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.573917  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:17.573923  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:17.573989  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:17.609297  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:17.609320  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:17.609326  346554 cri.go:89] found id: ""
	I1002 07:23:17.609333  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:17.609390  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.613640  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.617183  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:17.617253  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:17.647944  346554 cri.go:89] found id: ""
	I1002 07:23:17.647971  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.647980  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:17.647987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:17.648045  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:17.674528  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:17.674552  346554 cri.go:89] found id: ""
	I1002 07:23:17.674561  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:17.674617  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:17.678979  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:17.679143  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:17.706803  346554 cri.go:89] found id: ""
	I1002 07:23:17.706828  346554 logs.go:282] 0 containers: []
	W1002 07:23:17.706837  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:17.706846  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:17.706857  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:17.801171  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:17.801207  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:17.817922  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:17.817952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:17.889064  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:17.889103  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:17.971481  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:17.971518  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:18.051668  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:18.051712  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:18.090695  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:18.090723  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:18.162304  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:18.153808   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.154523   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156207   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156763   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.158433   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:18.153808   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.154523   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156207   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.156763   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:18.158433   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:18.162328  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:18.162343  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:18.194200  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:18.194233  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:18.231522  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:18.231557  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:18.263215  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:18.263246  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:20.795234  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:20.807871  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:20.807939  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:20.839049  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:20.839070  346554 cri.go:89] found id: ""
	I1002 07:23:20.839098  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:20.839172  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.842946  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:20.843023  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:20.873446  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:20.873469  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:20.873475  346554 cri.go:89] found id: ""
	I1002 07:23:20.873484  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:20.873540  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.877435  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.881337  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:20.881415  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:20.918940  346554 cri.go:89] found id: ""
	I1002 07:23:20.918971  346554 logs.go:282] 0 containers: []
	W1002 07:23:20.918980  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:20.918987  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:20.919046  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:20.951052  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:20.951075  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:20.951112  346554 cri.go:89] found id: ""
	I1002 07:23:20.951120  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:20.951185  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.955805  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:20.959649  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:20.959737  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:20.987685  346554 cri.go:89] found id: ""
	I1002 07:23:20.987710  346554 logs.go:282] 0 containers: []
	W1002 07:23:20.987719  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:20.987726  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:20.987792  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:21.028577  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:21.028602  346554 cri.go:89] found id: ""
	I1002 07:23:21.028622  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:21.028683  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:21.032899  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:21.032977  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:21.062654  346554 cri.go:89] found id: ""
	I1002 07:23:21.062679  346554 logs.go:282] 0 containers: []
	W1002 07:23:21.062688  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:21.062698  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:21.062710  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:21.091027  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:21.091059  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:21.159267  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:21.159307  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:21.231814  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:21.231856  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:21.263174  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:21.263205  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:21.310161  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:21.310194  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:21.349961  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:21.349997  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:21.379224  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:21.379306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:21.454682  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:21.454722  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:21.560920  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:21.560960  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:21.578179  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:21.578211  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:21.668218  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:21.658544   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.659665   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.660225   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662214   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662758   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:21.658544   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.659665   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.660225   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662214   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:21.662758   10874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
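	The gather-and-retry cycle above repeats roughly every three seconds while the apiserver stays unreachable, collecting the same fixed set of sources on each pass. A condensed sketch of that per-source collection, assuming the container IDs from the crictl listings above (<container-id> is a placeholder, not a value from this log):
	
	    # per-container logs for kube-apiserver, etcd, kube-scheduler, kube-controller-manager
	    sudo /usr/local/bin/crictl logs --tail 400 <container-id>
	    # runtime and kubelet journals
	    sudo journalctl -u crio -n 400
	    sudo journalctl -u kubelet -n 400
	    # kernel warnings and errors
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # overall container status
	    sudo crictl ps -a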
	I1002 07:23:24.169201  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:24.181390  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:24.181463  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:24.213873  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:24.213896  346554 cri.go:89] found id: ""
	I1002 07:23:24.213905  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:24.213963  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.217730  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:24.217807  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:24.252439  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:24.252471  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:24.252476  346554 cri.go:89] found id: ""
	I1002 07:23:24.252484  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:24.252567  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.256307  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.260273  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:24.260349  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:24.287826  346554 cri.go:89] found id: ""
	I1002 07:23:24.287852  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.287862  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:24.287870  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:24.287973  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:24.315859  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:24.315884  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:24.315890  346554 cri.go:89] found id: ""
	I1002 07:23:24.315897  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:24.315975  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.319993  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.323777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:24.323877  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:24.354601  346554 cri.go:89] found id: ""
	I1002 07:23:24.354631  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.354642  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:24.354648  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:24.354730  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:24.384370  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:24.384395  346554 cri.go:89] found id: ""
	I1002 07:23:24.384403  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:24.384488  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:24.388615  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:24.388695  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:24.415488  346554 cri.go:89] found id: ""
	I1002 07:23:24.415514  346554 logs.go:282] 0 containers: []
	W1002 07:23:24.415523  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:24.415533  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:24.415546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:24.458158  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:24.458192  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:24.534624  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:24.534667  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:24.567982  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:24.568016  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:24.596275  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:24.596306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:24.674293  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:24.674334  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:24.777997  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:24.778039  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:24.801006  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:24.801036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:24.862265  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:24.862303  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:24.913721  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:24.913755  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:24.991414  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:24.983196   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.983791   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985038   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985724   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.987370   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:24.983196   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.983791   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985038   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.985724   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:24.987370   11003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:24.991443  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:24.991458  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.525665  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:27.536783  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:27.536869  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:27.563440  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.563507  346554 cri.go:89] found id: ""
	I1002 07:23:27.563531  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:27.563623  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.568154  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:27.568278  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:27.597184  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:27.597205  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:27.597211  346554 cri.go:89] found id: ""
	I1002 07:23:27.597230  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:27.597306  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.601073  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.604808  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:27.604880  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:27.635124  346554 cri.go:89] found id: ""
	I1002 07:23:27.635147  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.635155  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:27.635161  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:27.635220  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:27.662383  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:27.662455  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:27.662474  346554 cri.go:89] found id: ""
	I1002 07:23:27.662500  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:27.662607  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.666537  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.670164  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:27.670238  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:27.697001  346554 cri.go:89] found id: ""
	I1002 07:23:27.697028  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.697037  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:27.697044  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:27.697127  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:27.722638  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:27.722662  346554 cri.go:89] found id: ""
	I1002 07:23:27.722672  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:27.722728  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:27.726512  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:27.726591  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:27.755270  346554 cri.go:89] found id: ""
	I1002 07:23:27.755300  346554 logs.go:282] 0 containers: []
	W1002 07:23:27.755309  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:27.755319  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:27.755330  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:27.854338  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:27.854379  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:27.928550  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:27.920395   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.921207   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.922978   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.923800   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.924646   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:27.920395   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.921207   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.922978   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.923800   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:27.924646   11084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:27.928577  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:27.928590  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:27.960015  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:27.960047  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:28.025647  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:28.025706  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:28.064089  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:28.064125  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:28.158385  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:28.158423  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:28.196505  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:28.196533  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:28.215893  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:28.215921  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:28.246774  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:28.246821  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:28.274010  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:28.274036  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:30.852724  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:30.863588  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:30.863660  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:30.891349  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:30.891371  346554 cri.go:89] found id: ""
	I1002 07:23:30.891380  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:30.891457  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.895249  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:30.895343  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:30.922333  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:30.922356  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:30.922361  346554 cri.go:89] found id: ""
	I1002 07:23:30.922368  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:30.922423  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.926269  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.929885  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:30.929957  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:30.956216  346554 cri.go:89] found id: ""
	I1002 07:23:30.956253  346554 logs.go:282] 0 containers: []
	W1002 07:23:30.956269  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:30.956285  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:30.956347  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:30.984076  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:30.984101  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:30.984107  346554 cri.go:89] found id: ""
	I1002 07:23:30.984121  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:30.984182  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.988082  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:30.991650  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:30.991741  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:31.028148  346554 cri.go:89] found id: ""
	I1002 07:23:31.028174  346554 logs.go:282] 0 containers: []
	W1002 07:23:31.028184  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:31.028190  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:31.028274  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:31.057090  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:31.057116  346554 cri.go:89] found id: ""
	I1002 07:23:31.057125  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:31.057195  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:31.064614  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:31.064695  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:31.096928  346554 cri.go:89] found id: ""
	I1002 07:23:31.096996  346554 logs.go:282] 0 containers: []
	W1002 07:23:31.097022  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:31.097042  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:31.097069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:31.155662  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:31.155701  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:31.202926  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:31.202958  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:31.236483  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:31.236508  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:31.341179  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:31.341216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:31.368996  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:31.369022  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:31.449499  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:31.449539  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:31.476326  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:31.476354  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:31.561871  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:31.561909  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:31.597214  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:31.597243  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:31.614646  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:31.614674  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:31.686141  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:31.672626   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.673293   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675177   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675791   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.677294   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:31.672626   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.673293   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675177   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.675791   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:31.677294   11287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
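	Each retry cycle in this log has the same shape: list container IDs per control-plane component with crictl, then tail each container's log (plus kubelet, dmesg, CRI-O, and a describe-nodes attempt). The sketch below is a simplified, standalone re-implementation of that loop, not minikube's own logs.go; it shells out to the same crictl commands that appear verbatim above and assumes crictl is installed and runnable via sudo on the node.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all container IDs (running or exited) whose name matches the
	// given component, using the same invocation seen in the log:
	// sudo crictl ps -a --quiet --name=<component>
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
		for _, component := range components {
			ids, err := containerIDs(component)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", component)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines, mirroring "crictl logs --tail 400 <id>" above.
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("== %s [%s] ==\n%s\n", component, id, logs)
			}
		}
	}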
	I1002 07:23:34.187051  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:34.198084  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:34.198163  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:34.225977  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:34.226000  346554 cri.go:89] found id: ""
	I1002 07:23:34.226009  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:34.226094  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.230977  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:34.231053  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:34.258817  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:34.258840  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:34.258845  346554 cri.go:89] found id: ""
	I1002 07:23:34.258853  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:34.258908  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.262894  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.266671  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:34.266772  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:34.296183  346554 cri.go:89] found id: ""
	I1002 07:23:34.296207  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.296217  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:34.296223  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:34.296283  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:34.329604  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:34.329678  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:34.329698  346554 cri.go:89] found id: ""
	I1002 07:23:34.329722  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:34.329830  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.333641  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.337102  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:34.337170  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:34.365600  346554 cri.go:89] found id: ""
	I1002 07:23:34.365626  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.365636  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:34.365645  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:34.365708  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:34.393323  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:34.393347  346554 cri.go:89] found id: ""
	I1002 07:23:34.393357  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:34.393439  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:34.397338  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:34.397411  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:34.423876  346554 cri.go:89] found id: ""
	I1002 07:23:34.423899  346554 logs.go:282] 0 containers: []
	W1002 07:23:34.423908  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:34.423918  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:34.423934  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:34.453221  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:34.453251  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:34.481067  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:34.481095  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:34.558614  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:34.558651  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:34.601917  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:34.601948  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:34.705602  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:34.705637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:34.769442  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:34.760694   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.761723   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.762620   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764275   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764621   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:34.760694   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.761723   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.762620   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764275   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:34.764621   11388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:34.769466  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:34.769478  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:34.808589  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:34.808615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:34.869982  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:34.870024  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:34.959694  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:34.959739  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:34.976284  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:34.976319  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:37.518488  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:37.530159  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:37.530242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:37.557004  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:37.557026  346554 cri.go:89] found id: ""
	I1002 07:23:37.557035  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:37.557091  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.560903  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:37.560976  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:37.593556  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:37.593580  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:37.593586  346554 cri.go:89] found id: ""
	I1002 07:23:37.593594  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:37.593652  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.597692  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.601598  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:37.601672  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:37.628723  346554 cri.go:89] found id: ""
	I1002 07:23:37.628751  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.628761  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:37.628767  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:37.628832  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:37.656989  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:37.657010  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:37.657014  346554 cri.go:89] found id: ""
	I1002 07:23:37.657022  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:37.657090  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.660940  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.664730  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:37.664810  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:37.690545  346554 cri.go:89] found id: ""
	I1002 07:23:37.690567  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.690575  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:37.690582  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:37.690638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:37.718139  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:37.718164  346554 cri.go:89] found id: ""
	I1002 07:23:37.718173  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:37.718239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:37.722013  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:37.722130  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:37.748320  346554 cri.go:89] found id: ""
	I1002 07:23:37.748387  346554 logs.go:282] 0 containers: []
	W1002 07:23:37.748410  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:37.748439  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:37.748478  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:37.848896  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:37.848937  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:37.935000  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:37.926953   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.927824   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929407   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929842   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.931438   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:37.926953   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.927824   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929407   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.929842   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:37.931438   11498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:37.935035  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:37.935050  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:37.998904  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:37.998949  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:38.039239  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:38.039274  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:38.133839  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:38.133878  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:38.164590  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:38.164617  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:38.247363  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:38.247401  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:38.263025  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:38.263053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:38.292185  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:38.292215  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:38.324631  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:38.324662  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
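	One detail worth pulling out of these cycles: crictl consistently reports a kube-apiserver container ID (fc952c5b...), yet every kubectl call is refused on localhost:8443, so the container exists but the process is not (or not yet) serving on that port. A small check that captures exactly that combination, written here as a hedged sketch that assumes the same host, port, and crictl setup as the log, might look like:

	package main

	import (
		"fmt"
		"net"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		// Is any kube-apiserver container listed at all (running or exited)?
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		containerListed := err == nil && len(strings.Fields(string(out))) > 0

		// Does anything accept TCP connections on the apiserver port shown in the log?
		conn, dialErr := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if dialErr == nil {
			conn.Close()
		}

		fmt.Printf("kube-apiserver container listed: %v, port 8443 accepting connections: %v\n",
			containerListed, dialErr == nil)
	}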
	I1002 07:23:40.856053  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:40.866969  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:40.867037  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:40.908779  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:40.908802  346554 cri.go:89] found id: ""
	I1002 07:23:40.908811  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:40.908882  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.912652  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:40.912724  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:40.938681  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:40.938711  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:40.938717  346554 cri.go:89] found id: ""
	I1002 07:23:40.938725  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:40.938780  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.942512  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:40.945790  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:40.945860  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:40.973961  346554 cri.go:89] found id: ""
	I1002 07:23:40.974043  346554 logs.go:282] 0 containers: []
	W1002 07:23:40.974067  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:40.974093  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:40.974208  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:41.001128  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:41.001152  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:41.001158  346554 cri.go:89] found id: ""
	I1002 07:23:41.001165  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:41.001239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.007592  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.012525  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:41.012642  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:41.044447  346554 cri.go:89] found id: ""
	I1002 07:23:41.044521  346554 logs.go:282] 0 containers: []
	W1002 07:23:41.044545  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:41.044571  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:41.044654  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:41.083149  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:41.083216  346554 cri.go:89] found id: ""
	I1002 07:23:41.083250  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:41.083338  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:41.087534  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:41.087663  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:41.118406  346554 cri.go:89] found id: ""
	I1002 07:23:41.118470  346554 logs.go:282] 0 containers: []
	W1002 07:23:41.118494  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:41.118528  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:41.118559  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:41.195975  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:41.196011  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:41.227140  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:41.227172  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:41.313141  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:41.313180  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:41.416180  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:41.416218  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:41.459495  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:41.459536  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:41.488753  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:41.488785  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:41.532527  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:41.532560  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:41.548856  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:41.548885  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:41.618600  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:41.608308   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.609017   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.611140   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.612779   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.613471   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:41.608308   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.609017   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.611140   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.612779   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:41.613471   11683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:41.618624  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:41.618638  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:41.646628  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:41.646656  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.221221  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:44.231877  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:44.231950  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:44.257682  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:44.257714  346554 cri.go:89] found id: ""
	I1002 07:23:44.257724  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:44.257781  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.261470  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:44.261568  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:44.291709  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.291732  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:44.291738  346554 cri.go:89] found id: ""
	I1002 07:23:44.291749  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:44.291806  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.295774  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.299744  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:44.299891  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:44.326325  346554 cri.go:89] found id: ""
	I1002 07:23:44.326361  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.326372  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:44.326396  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:44.326476  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:44.353658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:44.353682  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:44.353687  346554 cri.go:89] found id: ""
	I1002 07:23:44.353694  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:44.353752  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.357660  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.361374  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:44.361448  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:44.390237  346554 cri.go:89] found id: ""
	I1002 07:23:44.390271  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.390281  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:44.390287  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:44.390356  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:44.421420  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:44.421444  346554 cri.go:89] found id: ""
	I1002 07:23:44.421453  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:44.421520  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:44.425406  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:44.425480  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:44.453498  346554 cri.go:89] found id: ""
	I1002 07:23:44.453575  346554 logs.go:282] 0 containers: []
	W1002 07:23:44.453599  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:44.453627  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:44.453663  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:44.469406  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:44.469489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:44.537881  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:44.529402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.530101   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.531787   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.532402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.534048   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:44.529402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.530101   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.531787   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.532402   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:44.534048   11772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:44.537947  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:44.537976  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:44.566669  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:44.566750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:44.626234  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:44.626311  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:44.663981  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:44.664015  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:44.743176  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:44.743211  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:44.769609  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:44.769637  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:44.850618  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:44.850654  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:44.956047  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:44.956089  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:44.988388  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:44.988421  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:47.617924  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:47.629050  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:47.629142  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:47.657724  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:47.657747  346554 cri.go:89] found id: ""
	I1002 07:23:47.657756  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:47.657814  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.661805  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:47.661878  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:47.691884  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:47.691906  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:47.691911  346554 cri.go:89] found id: ""
	I1002 07:23:47.691919  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:47.691978  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.695983  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.699611  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:47.699685  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:47.731628  346554 cri.go:89] found id: ""
	I1002 07:23:47.731654  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.731664  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:47.731671  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:47.731732  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:47.760694  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:47.760718  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:47.760723  346554 cri.go:89] found id: ""
	I1002 07:23:47.760731  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:47.760830  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.764776  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.768282  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:47.768363  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:47.800941  346554 cri.go:89] found id: ""
	I1002 07:23:47.800967  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.800976  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:47.800982  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:47.801049  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:47.828847  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:47.828870  346554 cri.go:89] found id: ""
	I1002 07:23:47.828879  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:47.828955  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:47.832777  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:47.832850  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:47.861095  346554 cri.go:89] found id: ""
	I1002 07:23:47.861122  346554 logs.go:282] 0 containers: []
	W1002 07:23:47.861131  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:47.861141  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:47.861184  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:47.893617  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:47.893649  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:47.990939  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:47.990977  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:48.007073  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:48.007153  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:48.043757  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:48.043786  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:48.136713  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:48.136750  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:48.168119  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:48.168151  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:48.251880  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:48.251919  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:48.285530  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:48.285566  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:48.357500  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:48.349599   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.350239   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.351899   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.352380   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.353981   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:48.349599   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.350239   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.351899   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.352380   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:48.353981   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:48.357522  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:48.357537  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:48.403215  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:48.403293  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.006650  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:51.028354  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:51.028471  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:51.057229  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:51.057253  346554 cri.go:89] found id: ""
	I1002 07:23:51.057262  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:51.057329  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.061731  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:51.061807  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:51.089750  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:51.089772  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:51.089778  346554 cri.go:89] found id: ""
	I1002 07:23:51.089785  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:51.089848  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.094055  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.097989  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:51.098090  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:51.125460  346554 cri.go:89] found id: ""
	I1002 07:23:51.125487  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.125510  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:51.125536  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:51.125611  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:51.155658  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.155684  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:51.155689  346554 cri.go:89] found id: ""
	I1002 07:23:51.155698  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:51.155757  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.159937  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.164562  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:51.164639  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:51.194590  346554 cri.go:89] found id: ""
	I1002 07:23:51.194626  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.194635  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:51.194642  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:51.194720  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:51.230400  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:51.230424  346554 cri.go:89] found id: ""
	I1002 07:23:51.230433  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:51.230501  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:51.235241  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:51.235335  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:51.264526  346554 cri.go:89] found id: ""
	I1002 07:23:51.264551  346554 logs.go:282] 0 containers: []
	W1002 07:23:51.264562  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:51.264573  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:51.264603  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:51.292045  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:51.292128  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:51.377066  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:51.377104  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:51.408242  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:51.408273  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:51.437071  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:51.437100  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:51.508699  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:51.498128   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.498923   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.500573   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.501129   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.502653   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:51.498128   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.498923   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.500573   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.501129   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:51.502653   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:51.508723  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:51.508736  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:51.594052  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:51.594094  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:51.631968  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:51.632002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:51.710908  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:51.710950  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:51.751275  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:51.751309  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:51.859428  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:51.859510  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:54.376917  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:54.388247  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:54.388322  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:54.417539  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:54.417563  346554 cri.go:89] found id: ""
	I1002 07:23:54.417571  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:54.417634  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.421536  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:54.421612  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:54.452318  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:54.452342  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:54.452347  346554 cri.go:89] found id: ""
	I1002 07:23:54.452355  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:54.452410  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.457434  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.460992  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:54.461070  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:54.494010  346554 cri.go:89] found id: ""
	I1002 07:23:54.494031  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.494040  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:54.494045  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:54.494107  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:54.528280  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:54.528300  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:54.528305  346554 cri.go:89] found id: ""
	I1002 07:23:54.528312  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:54.528369  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.532283  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.535876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:54.535946  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:54.564214  346554 cri.go:89] found id: ""
	I1002 07:23:54.564240  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.564250  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:54.564256  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:54.564347  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:54.594060  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:54.594084  346554 cri.go:89] found id: ""
	I1002 07:23:54.594093  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:54.594169  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:54.598344  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:54.598442  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:54.632402  346554 cri.go:89] found id: ""
	I1002 07:23:54.632426  346554 logs.go:282] 0 containers: []
	W1002 07:23:54.632435  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:54.632445  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:54.632500  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:54.729477  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:54.729517  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:54.800743  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:54.791704   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.792414   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794124   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794646   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.796482   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:54.791704   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.792414   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794124   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.794646   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:54.796482   12181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:23:54.800815  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:54.800846  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:54.861032  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:54.861069  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:54.889171  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:54.889244  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:54.925585  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:54.925615  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:54.941174  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:54.941202  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:54.969205  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:54.969235  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:55.020047  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:55.020087  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:55.098725  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:55.098805  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:55.132210  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:55.132239  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:57.716428  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:23:57.730713  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:23:57.730787  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:23:57.757853  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:57.757878  346554 cri.go:89] found id: ""
	I1002 07:23:57.757887  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:23:57.757943  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.761971  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:23:57.762045  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:23:57.790866  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:57.790891  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:57.790897  346554 cri.go:89] found id: ""
	I1002 07:23:57.790904  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:23:57.790962  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.795621  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.799575  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:23:57.799653  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:23:57.830281  346554 cri.go:89] found id: ""
	I1002 07:23:57.830307  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.830317  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:23:57.830323  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:23:57.830382  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:23:57.858397  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:57.858420  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:57.858425  346554 cri.go:89] found id: ""
	I1002 07:23:57.858433  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:23:57.858488  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.862244  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.865851  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:23:57.865951  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:23:57.893160  346554 cri.go:89] found id: ""
	I1002 07:23:57.893234  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.893250  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:23:57.893258  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:23:57.893318  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:23:57.920413  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:57.920499  346554 cri.go:89] found id: ""
	I1002 07:23:57.920516  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:23:57.920585  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:23:57.924327  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:23:57.924423  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:23:57.951174  346554 cri.go:89] found id: ""
	I1002 07:23:57.951197  346554 logs.go:282] 0 containers: []
	W1002 07:23:57.951206  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:23:57.951216  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:23:57.951268  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:23:57.986550  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:23:57.986632  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:23:58.017224  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:23:58.017260  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:23:58.122339  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:23:58.122377  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:23:58.138465  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:23:58.138494  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:23:58.168292  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:23:58.168317  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:23:58.230852  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:23:58.230890  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:23:58.328715  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:23:58.328764  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:23:58.357761  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:23:58.357792  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:23:58.444436  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:23:58.444482  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:23:58.478280  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:23:58.478306  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:23:58.560395  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:23:58.551535   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.552077   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554124   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554594   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.555744   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:23:58.551535   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.552077   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554124   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.554594   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:23:58.555744   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:01.061663  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:01.077726  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:01.077804  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:01.106834  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:01.106860  346554 cri.go:89] found id: ""
	I1002 07:24:01.106869  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:01.106940  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.110940  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:01.111014  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:01.139370  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:01.139392  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:01.139397  346554 cri.go:89] found id: ""
	I1002 07:24:01.139404  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:01.139466  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.143857  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.148114  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:01.148207  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:01.178376  346554 cri.go:89] found id: ""
	I1002 07:24:01.178468  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.178493  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:01.178522  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:01.178635  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:01.208075  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:01.208098  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:01.208103  346554 cri.go:89] found id: ""
	I1002 07:24:01.208111  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:01.208178  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.212014  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.216098  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:01.216233  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:01.245384  346554 cri.go:89] found id: ""
	I1002 07:24:01.245424  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.245434  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:01.245440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:01.245503  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:01.282247  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:01.282322  346554 cri.go:89] found id: ""
	I1002 07:24:01.282346  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:01.282443  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:01.288826  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:01.288905  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:01.319901  346554 cri.go:89] found id: ""
	I1002 07:24:01.319926  346554 logs.go:282] 0 containers: []
	W1002 07:24:01.319934  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:01.319943  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:01.319956  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:01.389606  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:01.389692  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:01.444021  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:01.444055  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:01.526762  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:01.526804  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:01.559019  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:01.559049  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:01.634782  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:01.634818  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:01.709026  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:01.699679   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.700913   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.701980   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.702845   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.704779   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:01.699679   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.700913   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.701980   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.702845   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:01.704779   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
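	The repeated "connect: connection refused" lines above show that nothing is accepting connections on localhost:8443 yet, so the `kubectl describe nodes` step fails and minikube simply retries the whole log-gathering pass a few seconds later. A minimal Go sketch of that kind of readiness poll (an illustration only, not minikube's actual code; the address and timing are assumptions taken from the log) looks like this:

// Minimal sketch (not minikube's implementation): poll the apiserver port
// until it accepts TCP connections, which is what the repeated
// "connection refused" lines above indicate is still failing.
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer dials addr (e.g. "localhost:8443") until a connection
// succeeds or the deadline expires.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(3 * time.Second) // roughly the retry cadence seen in this log
	}
	return fmt.Errorf("apiserver at %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForAPIServer("localhost:8443", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}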
	I1002 07:24:01.709100  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:01.709120  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:01.738970  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:01.739000  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:01.770329  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:01.770364  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:01.884154  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:01.884232  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:01.902364  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:01.902390  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.435943  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:04.447669  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:04.447785  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:04.478942  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.478965  346554 cri.go:89] found id: ""
	I1002 07:24:04.478974  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:04.479030  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.483417  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:04.483511  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:04.518294  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:04.518320  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:04.518325  346554 cri.go:89] found id: ""
	I1002 07:24:04.518334  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:04.518388  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.522223  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.526427  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:04.526558  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:04.558950  346554 cri.go:89] found id: ""
	I1002 07:24:04.558987  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.558996  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:04.559003  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:04.559153  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:04.586620  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:04.586645  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:04.586650  346554 cri.go:89] found id: ""
	I1002 07:24:04.586658  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:04.586737  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.590676  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.594540  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:04.594644  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:04.621686  346554 cri.go:89] found id: ""
	I1002 07:24:04.621709  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.621719  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:04.621725  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:04.621781  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:04.649834  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:04.649855  346554 cri.go:89] found id: ""
	I1002 07:24:04.649863  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:04.649944  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:04.654335  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:04.654436  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:04.687143  346554 cri.go:89] found id: ""
	I1002 07:24:04.687166  346554 logs.go:282] 0 containers: []
	W1002 07:24:04.687175  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:04.687184  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:04.687216  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:04.715298  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:04.715329  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:04.758402  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:04.758436  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:04.838751  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:04.838789  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:04.870372  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:04.870403  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:04.984168  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:04.984207  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:04.999826  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:04.999858  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:05.088672  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:05.079342   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.080234   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082236   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082893   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.084684   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:05.079342   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.080234   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082236   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.082893   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:05.084684   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:05.088696  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:05.088709  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:05.150024  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:05.150063  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:05.226780  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:05.226819  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:05.255567  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:05.255605  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:07.791197  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:07.803594  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:07.803689  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:07.833077  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:07.833103  346554 cri.go:89] found id: ""
	I1002 07:24:07.833113  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:07.833214  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.837537  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:07.837661  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:07.866899  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:07.866926  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:07.866932  346554 cri.go:89] found id: ""
	I1002 07:24:07.866939  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:07.867000  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.870759  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.874593  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:07.874713  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:07.903524  346554 cri.go:89] found id: ""
	I1002 07:24:07.903587  346554 logs.go:282] 0 containers: []
	W1002 07:24:07.903620  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:07.903644  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:07.903738  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:07.934472  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:07.934547  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:07.934567  346554 cri.go:89] found id: ""
	I1002 07:24:07.934593  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:07.934688  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.938660  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:07.942349  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:07.942453  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:07.969924  346554 cri.go:89] found id: ""
	I1002 07:24:07.969947  346554 logs.go:282] 0 containers: []
	W1002 07:24:07.969956  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:07.969964  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:07.970022  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:07.998801  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:07.998826  346554 cri.go:89] found id: ""
	I1002 07:24:07.998834  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:07.998890  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:08.006051  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:08.006218  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:08.043683  346554 cri.go:89] found id: ""
	I1002 07:24:08.043712  346554 logs.go:282] 0 containers: []
	W1002 07:24:08.043723  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:08.043733  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:08.043746  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:08.094506  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:08.094546  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:08.175873  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:08.175912  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:08.208161  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:08.208191  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:08.234954  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:08.234983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:08.301287  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:08.301325  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:08.377087  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:08.377123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:08.405378  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:08.405407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:08.431355  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:08.431386  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:08.536433  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:08.536479  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:08.553542  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:08.553575  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:08.621305  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:08.613680   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.614222   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.615692   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.616097   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.617557   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:08.613680   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.614222   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.615692   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.616097   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:08.617557   12800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
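	Each polling pass begins with the container-discovery step seen above: for every control-plane component, minikube runs `sudo crictl ps -a --quiet --name=<component>` over SSH and records the container IDs it gets back (the "found id" / "0 containers" lines). A small Go sketch of that step, under the assumption that sudo and crictl are on PATH (illustrative only, not the project's code):

// Sketch of the container-discovery step: ask crictl for the IDs of all
// containers whose name matches a component, mirroring the
// `crictl ps -a --quiet --name=...` calls shown in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs printed by crictl, one per line.
// An empty slice corresponds to the "0 containers" lines in the log.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(component)
		if err != nil {
			fmt.Println(component, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", component, len(ids), ids)
	}
}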
	I1002 07:24:11.122975  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:11.135150  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:11.135231  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:11.168608  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:11.168633  346554 cri.go:89] found id: ""
	I1002 07:24:11.168642  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:11.168704  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.172810  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:11.172893  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:11.204325  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:11.204401  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:11.204413  346554 cri.go:89] found id: ""
	I1002 07:24:11.204422  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:11.204491  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.208514  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.212208  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:11.212287  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:11.245698  346554 cri.go:89] found id: ""
	I1002 07:24:11.245725  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.245736  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:11.245743  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:11.245805  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:11.274196  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:11.274219  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:11.274224  346554 cri.go:89] found id: ""
	I1002 07:24:11.274231  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:11.274292  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.278411  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.282735  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:11.282813  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:11.322108  346554 cri.go:89] found id: ""
	I1002 07:24:11.322129  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.322138  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:11.322144  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:11.322203  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:11.350582  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:11.350647  346554 cri.go:89] found id: ""
	I1002 07:24:11.350659  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:11.350715  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:11.354559  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:11.354628  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:11.386834  346554 cri.go:89] found id: ""
	I1002 07:24:11.386899  346554 logs.go:282] 0 containers: []
	W1002 07:24:11.386923  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:11.386951  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:11.386981  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:11.465595  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:11.465632  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:11.541894  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:11.541933  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:11.619365  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:11.619408  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:11.647305  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:11.647336  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:11.686923  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:11.686952  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:11.792344  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:11.792440  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:11.814593  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:11.814623  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:11.895211  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:11.886121   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.886872   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.888767   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.889333   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.890295   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:11.886121   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.886872   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.888767   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.889333   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:11.890295   12915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:11.895236  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:11.895250  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:11.921556  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:11.921586  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:11.957833  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:11.957872  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:14.490490  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:14.502377  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:14.502482  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:14.534162  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:14.534185  346554 cri.go:89] found id: ""
	I1002 07:24:14.534205  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:14.534262  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.538631  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:14.538701  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:14.568427  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:14.568450  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:14.568456  346554 cri.go:89] found id: ""
	I1002 07:24:14.568463  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:14.568527  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.572917  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.576683  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:14.576760  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:14.604778  346554 cri.go:89] found id: ""
	I1002 07:24:14.604809  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.604819  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:14.604825  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:14.604932  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:14.631788  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:14.631812  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:14.631817  346554 cri.go:89] found id: ""
	I1002 07:24:14.631824  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:14.631887  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.635951  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.639653  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:14.639769  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:14.682797  346554 cri.go:89] found id: ""
	I1002 07:24:14.682823  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.682832  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:14.682839  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:14.682899  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:14.722146  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:14.722175  346554 cri.go:89] found id: ""
	I1002 07:24:14.722183  346554 logs.go:282] 1 containers: [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:14.722239  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:14.727035  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:14.727164  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:14.759413  346554 cri.go:89] found id: ""
	I1002 07:24:14.759438  346554 logs.go:282] 0 containers: []
	W1002 07:24:14.759447  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:14.759458  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:14.759470  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:14.786929  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:14.787000  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:14.853005  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:14.853042  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:14.899040  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:14.899071  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:15.004708  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:15.004742  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:15.123051  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:15.123106  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:15.154325  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:15.154357  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:15.183161  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:15.183248  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:15.265975  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:15.266013  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:15.299575  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:15.299607  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:15.315427  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:15.315454  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:15.394115  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:15.385425   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.386315   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388134   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388810   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.390355   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:15.385425   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.386315   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388134   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.388810   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:15.390355   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:17.895569  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:17.909876  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:17.909985  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:17.941059  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:17.941083  346554 cri.go:89] found id: ""
	I1002 07:24:17.941092  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:17.941159  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.945318  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:17.945401  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:17.973722  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:17.973743  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:17.973747  346554 cri.go:89] found id: ""
	I1002 07:24:17.973755  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:17.973813  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.978340  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:17.983135  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:17.983214  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:18.024398  346554 cri.go:89] found id: ""
	I1002 07:24:18.024424  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.024433  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:18.024440  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:18.024518  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:18.053513  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:18.053535  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:18.053540  346554 cri.go:89] found id: ""
	I1002 07:24:18.053548  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:18.053631  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.057706  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.061744  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:18.061820  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:18.093847  346554 cri.go:89] found id: ""
	I1002 07:24:18.093873  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.093884  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:18.093891  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:18.093956  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:18.123256  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:18.123283  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:18.123289  346554 cri.go:89] found id: ""
	I1002 07:24:18.123296  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:18.123355  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.127263  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:18.131206  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:18.131284  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:18.157688  346554 cri.go:89] found id: ""
	I1002 07:24:18.157714  346554 logs.go:282] 0 containers: []
	W1002 07:24:18.157724  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:18.157733  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:18.157745  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:18.203920  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:18.203946  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:18.220036  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:18.220064  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:18.288859  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:18.281281   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.282404   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283332   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283985   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.285062   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:18.281281   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.282404   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283332   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.283985   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:18.285062   13172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
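	After discovery, the "Gathering logs for ..." lines fan out over a fixed set of sources, each capped at the last 400 lines: per-container `crictl logs --tail 400 <id>`, `journalctl` for kubelet and CRI-O, and a filtered `dmesg`. A compact Go sketch of that fan-out, using the same shell commands the log shows being run (an assumption-laden illustration, not minikube's code):

// Sketch of the log-gathering fan-out: each source is one shell command
// capped at its last 400 lines, run through bash as in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
}

func main() {
	sources := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"CRI-O":   "sudo journalctl -u crio -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		// per-container logs use an ID from the discovery step, e.g.:
		// "sudo /usr/local/bin/crictl logs --tail 400 <container-id>"
	}
	for name, cmd := range sources {
		gather(name, cmd)
	}
}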
	I1002 07:24:18.288885  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:18.288898  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:18.326029  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:18.326064  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:18.410880  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:18.410919  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:18.516955  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:18.516994  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:18.548753  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:18.548786  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:18.613812  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:18.613849  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:18.643416  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:18.643444  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:18.670170  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:18.670199  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:18.699194  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:18.699231  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:21.274356  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:21.285713  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:21.285785  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:21.312389  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:21.312413  346554 cri.go:89] found id: ""
	I1002 07:24:21.312427  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:21.312492  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.316212  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:21.316290  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:21.341368  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:21.341390  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:21.341396  346554 cri.go:89] found id: ""
	I1002 07:24:21.341403  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:21.341458  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.345157  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.348764  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:21.348841  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:21.381263  346554 cri.go:89] found id: ""
	I1002 07:24:21.381292  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.381302  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:21.381308  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:21.381366  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:21.412001  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:21.412022  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:21.412027  346554 cri.go:89] found id: ""
	I1002 07:24:21.412035  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:21.412092  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.415991  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.419745  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:21.419818  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:21.448790  346554 cri.go:89] found id: ""
	I1002 07:24:21.448817  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.448826  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:21.448832  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:21.448894  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:21.476863  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:21.476885  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:21.476890  346554 cri.go:89] found id: ""
	I1002 07:24:21.476897  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:21.476995  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.481180  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:21.484939  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:21.485015  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:21.518979  346554 cri.go:89] found id: ""
	I1002 07:24:21.519005  346554 logs.go:282] 0 containers: []
	W1002 07:24:21.519014  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:21.519023  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:21.519035  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:21.548837  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:21.548868  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:21.577649  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:21.577678  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:21.614505  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:21.614538  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:21.648602  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:21.648630  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:21.730478  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:21.730515  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:21.770385  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:21.770420  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:21.869953  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:21.869990  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:21.890825  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:21.890864  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:21.963492  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:21.954886   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.955596   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957198   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957744   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.959330   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:21.954886   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.955596   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957198   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.957744   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:21.959330   13353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:21.963514  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:21.963531  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:21.990531  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:21.990559  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:22.069923  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:22.070005  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:24.652448  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:24.663850  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:24.663928  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:24.691270  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:24.691349  346554 cri.go:89] found id: ""
	I1002 07:24:24.691385  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:24.691483  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.695776  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:24.695846  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:24.722540  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:24.722563  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:24.722568  346554 cri.go:89] found id: ""
	I1002 07:24:24.722575  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:24.722641  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.726529  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.730111  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:24.730184  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:24.760973  346554 cri.go:89] found id: ""
	I1002 07:24:24.760999  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.761009  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:24.761015  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:24.761096  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:24.788682  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:24.788702  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:24.788707  346554 cri.go:89] found id: ""
	I1002 07:24:24.788714  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:24.788771  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.795284  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.800831  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:24.800927  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:24.826399  346554 cri.go:89] found id: ""
	I1002 07:24:24.826434  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.826443  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:24.826464  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:24.826550  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:24.854301  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:24.854328  346554 cri.go:89] found id: "fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:24.854334  346554 cri.go:89] found id: ""
	I1002 07:24:24.854341  346554 logs.go:282] 2 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430]
	I1002 07:24:24.854423  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.858547  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:24.862285  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:24.862407  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:24.892024  346554 cri.go:89] found id: ""
	I1002 07:24:24.892048  346554 logs.go:282] 0 containers: []
	W1002 07:24:24.892057  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:24.892067  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:24.892079  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:24.993633  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:24.993672  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:25.023967  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:25.023999  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:25.088069  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:25.088104  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:25.171716  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:25.171754  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:25.211296  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:25.211330  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:25.277865  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:25.269711   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.270447   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272032   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272563   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.274098   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:25.269711   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.270447   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272032   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.272563   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:25.274098   13474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:25.277888  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:25.277901  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:25.305336  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:25.305363  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:25.339149  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:25.339311  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:25.419370  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:25.419407  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:25.452415  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:25.452447  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:25.482792  346554 logs.go:123] Gathering logs for kube-controller-manager [fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430] ...
	I1002 07:24:25.482824  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fbcd761dfccd33c704bbe54d4fd03e8384b2136707422918d90227fc19bdf430"
	I1002 07:24:28.019833  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:28.031976  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:28.032047  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:28.061518  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:28.061538  346554 cri.go:89] found id: ""
	I1002 07:24:28.061547  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:28.061610  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.065737  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:28.065812  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:28.100250  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:28.100274  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:28.100280  346554 cri.go:89] found id: ""
	I1002 07:24:28.100287  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:28.100347  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.104729  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.109130  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:28.109242  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:28.136194  346554 cri.go:89] found id: ""
	I1002 07:24:28.136220  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.136229  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:28.136235  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:28.136294  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:28.177728  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:28.177751  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:28.177756  346554 cri.go:89] found id: ""
	I1002 07:24:28.177764  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:28.177822  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.182057  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.185909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:28.185984  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:28.213081  346554 cri.go:89] found id: ""
	I1002 07:24:28.213104  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.213114  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:28.213120  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:28.213180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:28.242037  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:28.242061  346554 cri.go:89] found id: ""
	I1002 07:24:28.242070  346554 logs.go:282] 1 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd]
	I1002 07:24:28.242125  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:28.245909  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:28.245982  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:28.272643  346554 cri.go:89] found id: ""
	I1002 07:24:28.272688  346554 logs.go:282] 0 containers: []
	W1002 07:24:28.272698  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:28.272708  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:28.272741  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:28.368590  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:28.368674  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:28.441922  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:28.433374   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.434538   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.435818   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.436626   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.438305   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:28.433374   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.434538   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.435818   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.436626   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:28.438305   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:28.441993  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:28.442025  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:28.485137  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:28.485174  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:28.519916  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:28.519949  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:28.547334  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:28.547364  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:28.578668  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:28.578698  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:28.597024  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:28.597053  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:28.625533  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:28.625562  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:28.703945  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:28.703983  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:28.782221  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:28.782256  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:31.363217  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:31.375576  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:24:31.375651  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:24:31.412392  346554 cri.go:89] found id: "fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:31.412416  346554 cri.go:89] found id: ""
	I1002 07:24:31.412425  346554 logs.go:282] 1 containers: [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10]
	I1002 07:24:31.412489  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.416397  346554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:24:31.416497  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:24:31.447142  346554 cri.go:89] found id: "e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:31.447172  346554 cri.go:89] found id: "930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:31.447178  346554 cri.go:89] found id: ""
	I1002 07:24:31.447186  346554 logs.go:282] 2 containers: [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707]
	I1002 07:24:31.447245  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.451130  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.454872  346554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:24:31.454972  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:24:31.491372  346554 cri.go:89] found id: ""
	I1002 07:24:31.491393  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.491401  346554 logs.go:284] No container was found matching "coredns"
	I1002 07:24:31.491407  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:24:31.491464  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:24:31.523581  346554 cri.go:89] found id: "68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:31.523606  346554 cri.go:89] found id: "0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:31.523611  346554 cri.go:89] found id: ""
	I1002 07:24:31.523618  346554 logs.go:282] 2 containers: [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209]
	I1002 07:24:31.523696  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.527714  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.531521  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:24:31.531638  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:24:31.557016  346554 cri.go:89] found id: ""
	I1002 07:24:31.557090  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.557110  346554 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:24:31.557117  346554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:24:31.557180  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:24:31.587792  346554 cri.go:89] found id: "38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:31.587815  346554 cri.go:89] found id: ""
	I1002 07:24:31.587824  346554 logs.go:282] 1 containers: [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd]
	I1002 07:24:31.587900  346554 ssh_runner.go:195] Run: which crictl
	I1002 07:24:31.591474  346554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:24:31.591544  346554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:24:31.621938  346554 cri.go:89] found id: ""
	I1002 07:24:31.622002  346554 logs.go:282] 0 containers: []
	W1002 07:24:31.622025  346554 logs.go:284] No container was found matching "kindnet"
	I1002 07:24:31.622057  346554 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:24:31.622087  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:24:31.699830  346554 logs.go:123] Gathering logs for container status ...
	I1002 07:24:31.699940  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:24:31.731270  346554 logs.go:123] Gathering logs for kubelet ...
	I1002 07:24:31.731297  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:24:31.830036  346554 logs.go:123] Gathering logs for dmesg ...
	I1002 07:24:31.830073  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:24:31.849448  346554 logs.go:123] Gathering logs for kube-apiserver [fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10] ...
	I1002 07:24:31.849489  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc952c5b63c2f74f804f65c7d5fd74a7fa8b2a1a5441b7f71bb6b75e7f445c10"
	I1002 07:24:31.887973  346554 logs.go:123] Gathering logs for etcd [930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707] ...
	I1002 07:24:31.888002  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 930e71c7a1e5c6f0017ec20c2fd5f6d349121e6eab5456ec8c87bcfabdc4d707"
	I1002 07:24:31.925845  346554 logs.go:123] Gathering logs for kube-scheduler [0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209] ...
	I1002 07:24:31.925879  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a499d2953da9ba8ca7d81090ec0d9dcde4accb1c3ac634113057739e4bef209"
	I1002 07:24:31.955314  346554 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:24:31.955344  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:24:32.027448  346554 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 07:24:32.017106   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.018245   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.019008   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.021153   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.022262   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 07:24:32.017106   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.018245   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.019008   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.021153   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 07:24:32.022262   13778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:24:32.027527  346554 logs.go:123] Gathering logs for etcd [e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3] ...
	I1002 07:24:32.027556  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1c359884f733f531dc674669d039630bcab3a45e1b0a3f64243e180cd286ec3"
	I1002 07:24:32.097086  346554 logs.go:123] Gathering logs for kube-scheduler [68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893] ...
	I1002 07:24:32.097123  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 68312e0ba61f15ef960624743605a0b4a92369bfd16bf4c613a4580587fb5893"
	I1002 07:24:32.181841  346554 logs.go:123] Gathering logs for kube-controller-manager [38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd] ...
	I1002 07:24:32.181877  346554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38511d610df8837a90390498d95bb86185031e08487838d7b8bc82f06e271ecd"
	I1002 07:24:34.710633  346554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:24:34.725897  346554 out.go:203] 
	W1002 07:24:34.728826  346554 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1002 07:24:34.728867  346554 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1002 07:24:34.728877  346554 out.go:285] * Related issues:
	W1002 07:24:34.728892  346554 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1002 07:24:34.728908  346554 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1002 07:24:34.732168  346554 out.go:203] 
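	
	The failure above (K8S_APISERVER_MISSING: "apiserver process never appeared") is consistent with the repeated "connection refused" on localhost:8443 earlier in this log. As a minimal sketch of how the same two checks could be reproduced by hand on the node (for example over "minikube ssh"), the Go program below runs the same pgrep pattern the log uses and then probes port 8443. It is illustrative only, assumes sudo access on the node, and is not part of the minikube code base.
	
	package main
	
	import (
		"fmt"
		"net"
		"os/exec"
		"time"
	)
	
	func main() {
		// Same process check the log performs: sudo pgrep -xnf kube-apiserver.*minikube.*
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("no kube-apiserver process found:", err)
		} else {
			fmt.Printf("kube-apiserver PID: %s", out)
		}
	
		// Same endpoint kubectl tried above: is anything listening on localhost:8443?
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("localhost:8443 not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("localhost:8443 is accepting connections")
	}
	
	If the process exists but the port probe fails, the suggestion printed above (checking apiserver flags and SELinux) is the next place to look; if the process is missing entirely, the kube-apiserver container logs gathered earlier in this report are more relevant.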
	
	
	==> CRI-O <==
	Oct 02 07:19:49 ha-550225 crio[619]: time="2025-10-02T07:19:49.845674437Z" level=info msg="Started container" PID=1394 containerID=3269c04f5498e2befbc42b6cf2cdbe83a291623d3fde767dc07389c7422afd48 description=kube-system/coredns-66bc5c9577-s6dq8/coredns id=566bb378-7524-4452-b1e6-a25280ba5d7d name=/runtime.v1.RuntimeService/StartContainer sandboxID=e055873f04c2899609f0c3b597c607526b01fd136aa0e5f79f2676a446255f13
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.208804519Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.215218136Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.215264529Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.215287667Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.22352303Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.223562538Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.223586029Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.23080621Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.230844857Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.230864434Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.236373132Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:19:58 ha-550225 crio[619]: time="2025-10-02T07:19:58.236409153Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:20:15 ha-550225 conmon[1183]: conmon 48fccb25ba33b3850afc <ninfo>: container 1186 exited with status 1
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.461105809Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5008df2b-58c5-42b1-a1f6-e14a10f1abbb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.46213329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b8ddfc43-aba7-4f99-b91d-97240f3eaf35 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.46331964Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=55bd6811-47fe-4715-9579-6244ca41dc93 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.463596057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.472956017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.47327584Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6958a022ca5d2e537c24f18da644191de8f0c379072dbf05004476abea1680e8/merged/etc/passwd: no such file or directory"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.473326269Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6958a022ca5d2e537c24f18da644191de8f0c379072dbf05004476abea1680e8/merged/etc/group: no such file or directory"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.473692689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.493904849Z" level=info msg="Created container 5b2624a029b4c010b76ac52edd332193351ee65c37100ef8fbe63d85d02c3e71: kube-system/storage-provisioner/storage-provisioner" id=55bd6811-47fe-4715-9579-6244ca41dc93 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.495150407Z" level=info msg="Starting container: 5b2624a029b4c010b76ac52edd332193351ee65c37100ef8fbe63d85d02c3e71" id=b45832b0-a0c9-4ad1-8a10-5fba7e2ccb21 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:20:16 ha-550225 crio[619]: time="2025-10-02T07:20:16.499183546Z" level=info msg="Started container" PID=1457 containerID=5b2624a029b4c010b76ac52edd332193351ee65c37100ef8fbe63d85d02c3e71 description=kube-system/storage-provisioner/storage-provisioner id=b45832b0-a0c9-4ad1-8a10-5fba7e2ccb21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc2b31ede15861c2d07fce3991053334dcdd31f17b14021784ac1be8ed7e0b31
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	5b2624a029b4c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Running             storage-provisioner       2                   bc2b31ede1586       storage-provisioner                 kube-system
	3269c04f5498e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   e055873f04c28       coredns-66bc5c9577-s6dq8            kube-system
	448d4967d9024       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   5 minutes ago       Running             busybox                   1                   e934129b46d08       busybox-7b57f96db7-gph4b            default
	8a9ee715e4343       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Running             kindnet-cni               1                   edd2550dab874       kindnet-v7wnc                       kube-system
	5051222f30f0a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 minutes ago       Running             kube-proxy                1                   3e269f3dd585c       kube-proxy-skqs2                    kube-system
	48fccb25ba33b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Exited              storage-provisioner       1                   bc2b31ede1586       storage-provisioner                 kube-system
	97a0ea46cf7f7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   70fe4e27581bb       coredns-66bc5c9577-7gnh8            kube-system
	0dcd791f01f43       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   5 minutes ago       Running             kube-controller-manager   11                  19a2185d4a1eb       kube-controller-manager-ha-550225   kube-system
	8290015e8c15e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            10                  b2181fe55e225       kube-apiserver-ha-550225            kube-system
	29394f92b6a36       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   10                  19a2185d4a1eb       kube-controller-manager-ha-550225   kube-system
	5b0c0535da780       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Exited              kube-apiserver            9                   b2181fe55e225       kube-apiserver-ha-550225            kube-system
	5f7223d3b4009       27aa99ef07bb63db109cae7189f6029203a1ba86e8d201ca72eb836e3cdd0b43   8 minutes ago       Running             kube-vip                  1                   c455a5f1f2468       kube-vip-ha-550225                  kube-system
	43f493b22d959       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      3                   8c156781bf4ef       etcd-ha-550225                      kube-system
	2b4cd729501f6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            2                   b0329f645e59c       kube-scheduler-ha-550225            kube-system
	
	
	==> coredns [3269c04f5498e2befbc42b6cf2cdbe83a291623d3fde767dc07389c7422afd48] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50597 - 50866 "HINFO IN 2471821353559588233.5453610813505731232. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027203243s
	
	
	==> coredns [97a0ea46cf7f751b62a77918089760dd2e292198c9c2fc951fc282e4636ba492] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56369 - 30635 "HINFO IN 7137530019898463004.8479900960678889237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 2.018878387s
	[INFO] 127.0.0.1:38056 - 50955 "HINFO IN 7137530019898463004.8479900960678889237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041678969s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-550225
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_03_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:02:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:24:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:02:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:02:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:02:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:21:51 +0000   Thu, 02 Oct 2025 07:03:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-550225
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 804fc56d691a47babcd58cd3553282d3
	  System UUID:                96b9796d-f076-4bf0-ac0e-2eccc9d5873e
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-gph4b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-7gnh8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     21m
	  kube-system                 coredns-66bc5c9577-s6dq8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     21m
	  kube-system                 etcd-ha-550225                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         21m
	  kube-system                 kindnet-v7wnc                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-apiserver-ha-550225             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-550225    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-skqs2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-550225             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-550225                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 21m                  kube-proxy       
	  Normal   Starting                 5m7s                 kube-proxy       
	  Normal   Starting                 22m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 22m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  22m (x8 over 22m)    kubelet          Node ha-550225 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     22m (x8 over 22m)    kubelet          Node ha-550225 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    22m (x8 over 22m)    kubelet          Node ha-550225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasNoDiskPressure    21m                  kubelet          Node ha-550225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m                  kubelet          Node ha-550225 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  21m                  kubelet          Node ha-550225 status is now: NodeHasSufficientMemory
	  Normal   Starting                 21m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 21m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           21m                  node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   RegisteredNode           21m                  node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   NodeReady                21m                  kubelet          Node ha-550225 status is now: NodeReady
	  Normal   RegisteredNode           19m                  node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	  Normal   Starting                 8m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m2s (x8 over 8m2s)  kubelet          Node ha-550225 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m2s (x8 over 8m2s)  kubelet          Node ha-550225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m2s (x8 over 8m2s)  kubelet          Node ha-550225 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m53s                node-controller  Node ha-550225 event: Registered Node ha-550225 in Controller
	
	
	Name:               ha-550225-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_02T07_03_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:03:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:08:21 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 02 Oct 2025 07:08:20 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-550225-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 08dcc5805aac4edbab34bc4710db5eef
	  System UUID:                c6a05e31-956b-4e2f-af6e-62090982b7b4
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wbl7l                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-550225-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         21m
	  kube-system                 kindnet-n6kwf                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      21m
	  kube-system                 kube-apiserver-ha-550225-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-550225-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-jkkmq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-550225-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-550225-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   RegisteredNode           21m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   RegisteredNode           21m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   RegisteredNode           19m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-550225-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-550225-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x8 over 17m)  kubelet          Node ha-550225-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   RegisteredNode           5m53s              node-controller  Node ha-550225-m02 event: Registered Node ha-550225-m02 in Controller
	  Normal   NodeNotReady             5m3s               node-controller  Node ha-550225-m02 status is now: NodeNotReady
	
	
	Name:               ha-550225-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_02T07_04_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:04:57 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:08:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 02 Oct 2025 07:06:30 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-550225-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 315218fdc78646b99ded6becf46edf67
	  System UUID:                4ea95856-3488-4a4f-b299-e71342dd8d89
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-q95k5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-550225-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-2w4k5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-ha-550225-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-550225-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-2k945                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-550225-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-550225-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        19m    kube-proxy       
	  Normal  RegisteredNode  19m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  19m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  19m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  16m    node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  RegisteredNode  5m53s  node-controller  Node ha-550225-m03 event: Registered Node ha-550225-m03 in Controller
	  Normal  NodeNotReady    5m3s   node-controller  Node ha-550225-m03 status is now: NodeNotReady
	
	
	Name:               ha-550225-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-550225-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=ha-550225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_02T07_06_15_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:06:14 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-550225-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:08:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 02 Oct 2025 07:06:58 +0000   Thu, 02 Oct 2025 07:19:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-550225-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 4bfee30c7b434881a054adc06b7ffd73
	  System UUID:                9c87cedb-25ad-496a-a907-0c95201b1fe7
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2h5qc       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-proxy-gf52r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  RegisteredNode           18m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  NodeHasSufficientMemory  18m (x4 over 18m)  kubelet          Node ha-550225-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x4 over 18m)  kubelet          Node ha-550225-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x4 over 18m)  kubelet          Node ha-550225-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  NodeReady                17m                kubelet          Node ha-550225-m04 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  RegisteredNode           5m53s              node-controller  Node ha-550225-m04 event: Registered Node ha-550225-m04 in Controller
	  Normal  NodeNotReady             5m3s               node-controller  Node ha-550225-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct 2 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014797] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.531434] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.039899] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.787301] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.571073] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 2 05:52] hrtimer: interrupt took 24222969 ns
	[Oct 2 06:40] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:42] overlayfs: idmapped layers are currently not supported
	[  +0.072713] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 06:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 06:49] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:03] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:06] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:07] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:08] overlayfs: idmapped layers are currently not supported
	[  +3.056037] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:16] overlayfs: idmapped layers are currently not supported
	[  +2.690454] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [43f493b22d959eb4018498d0af4c8a03328857db3567f13cb0ffaee9ec06c00b] <==
	{"level":"warn","ts":"2025-10-02T07:24:53.579189Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.679068Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.698745Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.709000Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.712056Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.716863Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.725570Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.735134Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.740585Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.743692Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.748336Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.756396Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.783585Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.783789Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.807618Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.824436Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.841668Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.854261Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.863546Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.867797Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.871341Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.874401Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.879265Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.883994Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-02T07:24:53.893486Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"340e91ee989e8740","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 07:24:53 up  2:07,  0 user,  load average: 1.73, 1.09, 1.17
	Linux ha-550225 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8a9ee715e43431e349cf8c9be623f1a296d01184f3204e6a4a0f8394fc70358e] <==
	I1002 07:24:18.215444       1 main.go:324] Node ha-550225-m03 has CIDR [10.244.2.0/24] 
	I1002 07:24:28.207379       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1002 07:24:28.207511       1 main.go:324] Node ha-550225-m02 has CIDR [10.244.1.0/24] 
	I1002 07:24:28.207747       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1002 07:24:28.207827       1 main.go:324] Node ha-550225-m03 has CIDR [10.244.2.0/24] 
	I1002 07:24:28.207968       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1002 07:24:28.208017       1 main.go:324] Node ha-550225-m04 has CIDR [10.244.3.0/24] 
	I1002 07:24:28.208188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:28.208240       1 main.go:301] handling current node
	I1002 07:24:38.211259       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:38.211291       1 main.go:301] handling current node
	I1002 07:24:38.211307       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1002 07:24:38.211313       1 main.go:324] Node ha-550225-m02 has CIDR [10.244.1.0/24] 
	I1002 07:24:38.211454       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1002 07:24:38.211461       1 main.go:324] Node ha-550225-m03 has CIDR [10.244.2.0/24] 
	I1002 07:24:38.211513       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1002 07:24:38.211519       1 main.go:324] Node ha-550225-m04 has CIDR [10.244.3.0/24] 
	I1002 07:24:48.211187       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1002 07:24:48.211220       1 main.go:324] Node ha-550225-m03 has CIDR [10.244.2.0/24] 
	I1002 07:24:48.211353       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1002 07:24:48.211359       1 main.go:324] Node ha-550225-m04 has CIDR [10.244.3.0/24] 
	I1002 07:24:48.211418       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:48.211425       1 main.go:301] handling current node
	I1002 07:24:48.211436       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1002 07:24:48.211441       1 main.go:324] Node ha-550225-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [5b0c0535da7807f278c4629073d71180fc43a369ddae7136c7ffd515a7e95c6b] <==
	I1002 07:18:00.892979       1 server.go:150] Version: v1.34.1
	I1002 07:18:00.893076       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1002 07:18:02.015138       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1002 07:18:02.015252       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1002 07:18:02.015284       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1002 07:18:02.015315       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1002 07:18:02.015348       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1002 07:18:02.015382       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1002 07:18:02.015415       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1002 07:18:02.015448       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1002 07:18:02.015481       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1002 07:18:02.015512       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1002 07:18:02.015544       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1002 07:18:02.015575       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1002 07:18:02.033014       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1002 07:18:02.034577       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1002 07:18:02.035335       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1002 07:18:02.045748       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 07:18:02.056978       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1002 07:18:02.057010       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1002 07:18:02.057337       1 instance.go:239] Using reconciler: lease
	W1002 07:18:02.058416       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1002 07:18:22.032470       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1002 07:18:22.034569       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1002 07:18:22.058050       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [8290015e8c15e01397448ee79ef46f66d0ddd62579c46b3fd334baf073a9d6bc] <==
	I1002 07:18:54.901508       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 07:18:54.914584       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 07:18:54.914862       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 07:18:54.917776       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:18:54.920456       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 07:18:54.921448       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 07:18:54.921690       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 07:18:54.935006       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 07:18:54.935120       1 policy_source.go:240] refreshing policies
	I1002 07:18:54.936177       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:18:54.995047       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 07:18:54.995073       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 07:18:55.006144       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1002 07:18:55.006401       1 aggregator.go:171] initial CRD sync complete...
	I1002 07:18:55.006443       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 07:18:55.006472       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 07:18:55.006502       1 cache.go:39] Caches are synced for autoregister controller
	I1002 07:18:55.693729       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:18:55.915859       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1002 07:18:56.852268       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 07:18:56.854341       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:18:56.866097       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:19:00.445840       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 07:19:00.449414       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 07:19:00.588914       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [0dcd791f01f43325da7d666b2308b7e9e8afd6c81f0dce7b635d6b6e5e8a9df1] <==
	I1002 07:19:00.422763       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:19:00.422858       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 07:19:00.422891       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 07:19:00.429174       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 07:19:00.430239       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 07:19:00.434548       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 07:19:00.434793       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 07:19:00.434939       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:19:00.434988       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 07:19:00.435000       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 07:19:00.435011       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 07:19:00.435027       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 07:19:00.436974       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 07:19:00.437153       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 07:19:00.437213       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 07:19:00.437246       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 07:19:00.437276       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 07:19:00.440308       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:19:00.441271       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 07:19:00.447203       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 07:19:00.447327       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 07:19:00.447774       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-550225-m04"
	I1002 07:19:50.432665       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-550225-m04"
	I1002 07:19:50.870389       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1002 07:24:50.586504       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-q95k5"
	
	
	==> kube-controller-manager [29394f92b6a368bb1845ecb24b6cebce9a3e6e6816e60bf240997292037f264a] <==
	I1002 07:18:16.059120       1 serving.go:386] Generated self-signed cert in-memory
	I1002 07:18:17.185952       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1002 07:18:17.185981       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:18:17.187402       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 07:18:17.187586       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 07:18:17.187839       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1002 07:18:17.187927       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 07:18:33.066017       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-proxy [5051222f30f0ae589e47ad3f24adc858d48fe99da320fc5495aa8189ecc36596] <==
	I1002 07:19:45.951789       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:19:46.028809       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:19:46.129896       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:19:46.129933       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 07:19:46.130000       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:19:46.150308       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:19:46.150378       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:19:46.154018       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:19:46.154343       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:19:46.154416       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:19:46.157478       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:19:46.157553       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:19:46.157874       1 config.go:200] "Starting service config controller"
	I1002 07:19:46.157918       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:19:46.158250       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:19:46.158295       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:19:46.158742       1 config.go:309] "Starting node config controller"
	I1002 07:19:46.158794       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:19:46.158824       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:19:46.258046       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:19:46.258051       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 07:19:46.258406       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2b4cd729501f68e709fb29b74cdf4d89db019e465f669755a276bbd13dfa365d] <==
	E1002 07:17:57.915557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:17:59.343245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:18:17.475604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:18:19.476430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 07:18:20.523426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:18:20.961075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:18:21.209835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:18:22.175039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:18:23.065717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33332->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 07:18:23.065828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33338->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:18:23.065904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33346->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:18:23.066085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33356->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:18:23.066195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48896->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:18:23.066285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33302->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:18:23.066377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33316->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:18:23.066451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33400->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:18:23.067303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33366->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:18:23.067355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48888->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:18:23.067419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48872->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:18:23.067516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:48892->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 07:18:23.067591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:33382->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:18:50.334725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:18:54.767637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:18:54.767804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1002 07:18:55.890008       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:19:21 ha-550225 kubelet[753]: E1002 07:19:21.811346     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(f74a25ae-35bd-44b0-84a9-50a5df5dec1d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:21 ha-550225 kubelet[753]: E1002 07:19:21.811400     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="f74a25ae-35bd-44b0-84a9-50a5df5dec1d"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.810797     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-gph4b_default(193a390b-ce6f-4e39-afcc-7ee671deb0a1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.810843     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-gph4b" podUID="193a390b-ce6f-4e39-afcc-7ee671deb0a1"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.811359     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-s6dq8_kube-system(7626557b-e8fe-419b-b447-994cfa9b0f07): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:22 ha-550225 kubelet[753]: E1002 07:19:22.811895     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-s6dq8" podUID="7626557b-e8fe-419b-b447-994cfa9b0f07"
	Oct 02 07:19:23 ha-550225 kubelet[753]: E1002 07:19:23.811789     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-v7wnc_kube-system(b011ceef-f3c8-4142-8385-b09113581770): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:23 ha-550225 kubelet[753]: E1002 07:19:23.811826     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-v7wnc" podUID="b011ceef-f3c8-4142-8385-b09113581770"
	Oct 02 07:19:24 ha-550225 kubelet[753]: E1002 07:19:24.810191     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-7gnh8_kube-system(55461d93-6678-4e2e-8b48-7d26628c1cf9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:24 ha-550225 kubelet[753]: E1002 07:19:24.810240     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-7gnh8" podUID="55461d93-6678-4e2e-8b48-7d26628c1cf9"
	Oct 02 07:19:31 ha-550225 kubelet[753]: E1002 07:19:31.812684     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-skqs2_kube-system(d5f2a06e-009a-4c94-aee4-c6d515d1a38b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:31 ha-550225 kubelet[753]: E1002 07:19:31.812750     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-skqs2" podUID="d5f2a06e-009a-4c94-aee4-c6d515d1a38b"
	Oct 02 07:19:32 ha-550225 kubelet[753]: E1002 07:19:32.810908     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(f74a25ae-35bd-44b0-84a9-50a5df5dec1d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:32 ha-550225 kubelet[753]: E1002 07:19:32.811030     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="f74a25ae-35bd-44b0-84a9-50a5df5dec1d"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812380     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-s6dq8_kube-system(7626557b-e8fe-419b-b447-994cfa9b0f07): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812427     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-s6dq8" podUID="7626557b-e8fe-419b-b447-994cfa9b0f07"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812402     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-gph4b_default(193a390b-ce6f-4e39-afcc-7ee671deb0a1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.812917     753 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-v7wnc_kube-system(b011ceef-f3c8-4142-8385-b09113581770): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.814141     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-v7wnc" podUID="b011ceef-f3c8-4142-8385-b09113581770"
	Oct 02 07:19:35 ha-550225 kubelet[753]: E1002 07:19:35.814168     753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-gph4b" podUID="193a390b-ce6f-4e39-afcc-7ee671deb0a1"
	Oct 02 07:19:51 ha-550225 kubelet[753]: E1002 07:19:51.724599     753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d\": container with ID starting with 15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d not found: ID does not exist" containerID="15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d"
	Oct 02 07:19:51 ha-550225 kubelet[753]: I1002 07:19:51.724702     753 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d" err="rpc error: code = NotFound desc = could not find container \"15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d\": container with ID starting with 15bf6c4aafdc326cf3653c80ae65fb5a8d4dbb8d46617b42a729519c2e934f0d not found: ID does not exist"
	Oct 02 07:19:51 ha-550225 kubelet[753]: E1002 07:19:51.725359     753 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04\": container with ID starting with c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04 not found: ID does not exist" containerID="c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04"
	Oct 02 07:19:51 ha-550225 kubelet[753]: I1002 07:19:51.725398     753 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04" err="rpc error: code = NotFound desc = could not find container \"c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04\": container with ID starting with c24ef121a842d4f978a2d38274a68effeda44bee809465ef5661b421eba91f04 not found: ID does not exist"
	Oct 02 07:20:16 ha-550225 kubelet[753]: I1002 07:20:16.460466     753 scope.go:117] "RemoveContainer" containerID="48fccb25ba33b3850afc1ffdf5ca13f71673b1d992497dbcadf93bdbc8bdee4c"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-550225 -n ha-550225
helpers_test.go:269: (dbg) Run:  kubectl --context ha-550225 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-2x8th
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-550225 describe pod busybox-7b57f96db7-2x8th
helpers_test.go:290: (dbg) kubectl --context ha-550225 describe pod busybox-7b57f96db7-2x8th:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-2x8th
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q7r8b (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-q7r8b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  6s    default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (5.31s)
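The FailedScheduling event above says the replacement busybox pod cannot land anywhere: three nodes carry an untolerated node.kubernetes.io/unreachable taint and the remaining node is excluded by the deployment's pod anti-affinity. A quick manual check of that taint state, assuming the ha-550225 context used in the post-mortem is still reachable (the jsonpath expression below is only illustrative):

    # Node readiness at a glance; NotReady nodes normally carry the unreachable taint
    kubectl --context ha-550225 get nodes -o wide

    # Print each node's taints to confirm which ones the pending pod cannot tolerate
    kubectl --context ha-550225 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'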

                                                
                                    
TestJSONOutput/pause/Command (2.51s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-117474 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-117474 --output=json --user=testUser: exit status 80 (2.510104362s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8b5ab548-a1b6-43cb-acf2-50ecb1ec0afc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-117474 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"57554239-e719-4060-b466-4868c48457e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-02T07:26:36Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"5e26092f-0140-4435-b338-aa11ebf422c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-117474 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.51s)
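The GUEST_PAUSE error above means minikube could not even list the running containers: "sudo runc list -f json" inside the node fails with "open /run/runc: no such file or directory". A rough manual reproduction, assuming the json-output-117474 profile still exists in this state:

    # Re-run the exact pause invocation from the log; it is expected to exit 80 while the node is in this state
    out/minikube-linux-arm64 pause -p json-output-117474 --output=json --user=testUser

    # Check runc directly inside the node; the message points at a missing /run/runc directory
    out/minikube-linux-arm64 ssh -p json-output-117474 -- sudo runc list -f json
    out/minikube-linux-arm64 ssh -p json-output-117474 -- ls -ld /run/runc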

                                                
                                    
TestJSONOutput/unpause/Command (1.74s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-117474 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-117474 --output=json --user=testUser: exit status 80 (1.73723099s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"86749581-3a51-4d20-bdcc-46fed6fff086","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-117474 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"46cd00bb-b6e9-4d25-9faf-f0729c1fac5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-02T07:26:38Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"b626c0d8-b52b-4d90-8608-754022f58a08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-117474 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.74s)
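Both the pause and unpause runs print one CloudEvents-style JSON object per line, so the useful details are easy to miss. A small triage sketch, assuming jq is available and the command output was saved to a file such as pause.json (the filename is illustrative):

    # Show the event type, step/error name, and human-readable message for every emitted event
    jq -r '[.type, .data.name // "", .data.message // ""] | @tsv' pause.json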

                                                
                                    
TestScheduledStopUnix (33.74s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-482322 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-482322 --memory=3072 --driver=docker  --container-runtime=crio: (29.170396756s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-482322 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-482322 -n scheduled-stop-482322
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-482322 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 428668 running but should have been killed on reschedule of stop
panic.go:636: *** TestScheduledStopUnix FAILED at 2025-10-02 07:41:34.018987054 +0000 UTC m=+3614.365266620
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestScheduledStopUnix]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect scheduled-stop-482322
helpers_test.go:243: (dbg) docker inspect scheduled-stop-482322:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6c7662fd528cd414a6dd4cb409695d0787297eab55ab2c4251f004a0c0acf247",
	        "Created": "2025-10-02T07:41:09.817330012Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 426894,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:41:09.882855448Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/6c7662fd528cd414a6dd4cb409695d0787297eab55ab2c4251f004a0c0acf247/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c7662fd528cd414a6dd4cb409695d0787297eab55ab2c4251f004a0c0acf247/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c7662fd528cd414a6dd4cb409695d0787297eab55ab2c4251f004a0c0acf247/hosts",
	        "LogPath": "/var/lib/docker/containers/6c7662fd528cd414a6dd4cb409695d0787297eab55ab2c4251f004a0c0acf247/6c7662fd528cd414a6dd4cb409695d0787297eab55ab2c4251f004a0c0acf247-json.log",
	        "Name": "/scheduled-stop-482322",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-482322:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-482322",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c7662fd528cd414a6dd4cb409695d0787297eab55ab2c4251f004a0c0acf247",
	                "LowerDir": "/var/lib/docker/overlay2/23200fd9cabba66f7c3f58c280ea45825ca998cad7756be8f3a3eabfa96caaa4-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23200fd9cabba66f7c3f58c280ea45825ca998cad7756be8f3a3eabfa96caaa4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23200fd9cabba66f7c3f58c280ea45825ca998cad7756be8f3a3eabfa96caaa4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23200fd9cabba66f7c3f58c280ea45825ca998cad7756be8f3a3eabfa96caaa4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-482322",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-482322/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-482322",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-482322",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-482322",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7e4b91d24afc9ca988ff29ada06d3c7b4b9b02a600a71e4541c2a1f9c3e7968e",
	            "SandboxKey": "/var/run/docker/netns/7e4b91d24afc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33313"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33314"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33317"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33315"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33316"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-482322": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:1a:12:bb:0d:e8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "16c8b6b87a174b2b2d7b7164f7de3707757e2f99047ecf60cc58a284430966b7",
	                    "EndpointID": "b4281819701ccb60d46b1b212c31fdc5f16fbf299fde2de276fadd386dafb73d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-482322",
	                        "6c7662fd528c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-482322 -n scheduled-stop-482322
helpers_test.go:252: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-482322 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p scheduled-stop-482322 logs -n 25: (1.064692432s)
helpers_test.go:260: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p multinode-339784                                                                                                                                       │ multinode-339784      │ jenkins │ v1.37.0 │ 02 Oct 25 07:35 UTC │ 02 Oct 25 07:36 UTC │
	│ start   │ -p multinode-339784 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-339784      │ jenkins │ v1.37.0 │ 02 Oct 25 07:36 UTC │ 02 Oct 25 07:36 UTC │
	│ node    │ list -p multinode-339784                                                                                                                                  │ multinode-339784      │ jenkins │ v1.37.0 │ 02 Oct 25 07:36 UTC │                     │
	│ node    │ multinode-339784 node delete m03                                                                                                                          │ multinode-339784      │ jenkins │ v1.37.0 │ 02 Oct 25 07:36 UTC │ 02 Oct 25 07:37 UTC │
	│ stop    │ multinode-339784 stop                                                                                                                                     │ multinode-339784      │ jenkins │ v1.37.0 │ 02 Oct 25 07:37 UTC │ 02 Oct 25 07:37 UTC │
	│ start   │ -p multinode-339784 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                                                          │ multinode-339784      │ jenkins │ v1.37.0 │ 02 Oct 25 07:37 UTC │ 02 Oct 25 07:38 UTC │
	│ node    │ list -p multinode-339784                                                                                                                                  │ multinode-339784      │ jenkins │ v1.37.0 │ 02 Oct 25 07:38 UTC │                     │
	│ start   │ -p multinode-339784-m02 --driver=docker  --container-runtime=crio                                                                                         │ multinode-339784-m02  │ jenkins │ v1.37.0 │ 02 Oct 25 07:38 UTC │                     │
	│ start   │ -p multinode-339784-m03 --driver=docker  --container-runtime=crio                                                                                         │ multinode-339784-m03  │ jenkins │ v1.37.0 │ 02 Oct 25 07:38 UTC │ 02 Oct 25 07:38 UTC │
	│ node    │ add -p multinode-339784                                                                                                                                   │ multinode-339784      │ jenkins │ v1.37.0 │ 02 Oct 25 07:38 UTC │                     │
	│ delete  │ -p multinode-339784-m03                                                                                                                                   │ multinode-339784-m03  │ jenkins │ v1.37.0 │ 02 Oct 25 07:38 UTC │ 02 Oct 25 07:38 UTC │
	│ delete  │ -p multinode-339784                                                                                                                                       │ multinode-339784      │ jenkins │ v1.37.0 │ 02 Oct 25 07:38 UTC │ 02 Oct 25 07:38 UTC │
	│ start   │ -p test-preload-897040 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0 │ test-preload-897040   │ jenkins │ v1.37.0 │ 02 Oct 25 07:38 UTC │ 02 Oct 25 07:39 UTC │
	│ image   │ test-preload-897040 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-897040   │ jenkins │ v1.37.0 │ 02 Oct 25 07:39 UTC │ 02 Oct 25 07:39 UTC │
	│ stop    │ -p test-preload-897040                                                                                                                                    │ test-preload-897040   │ jenkins │ v1.37.0 │ 02 Oct 25 07:39 UTC │ 02 Oct 25 07:40 UTC │
	│ start   │ -p test-preload-897040 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                         │ test-preload-897040   │ jenkins │ v1.37.0 │ 02 Oct 25 07:40 UTC │ 02 Oct 25 07:41 UTC │
	│ image   │ test-preload-897040 image list                                                                                                                            │ test-preload-897040   │ jenkins │ v1.37.0 │ 02 Oct 25 07:41 UTC │ 02 Oct 25 07:41 UTC │
	│ delete  │ -p test-preload-897040                                                                                                                                    │ test-preload-897040   │ jenkins │ v1.37.0 │ 02 Oct 25 07:41 UTC │ 02 Oct 25 07:41 UTC │
	│ start   │ -p scheduled-stop-482322 --memory=3072 --driver=docker  --container-runtime=crio                                                                          │ scheduled-stop-482322 │ jenkins │ v1.37.0 │ 02 Oct 25 07:41 UTC │ 02 Oct 25 07:41 UTC │
	│ stop    │ -p scheduled-stop-482322 --schedule 5m                                                                                                                    │ scheduled-stop-482322 │ jenkins │ v1.37.0 │ 02 Oct 25 07:41 UTC │                     │
	│ stop    │ -p scheduled-stop-482322 --schedule 5m                                                                                                                    │ scheduled-stop-482322 │ jenkins │ v1.37.0 │ 02 Oct 25 07:41 UTC │                     │
	│ stop    │ -p scheduled-stop-482322 --schedule 5m                                                                                                                    │ scheduled-stop-482322 │ jenkins │ v1.37.0 │ 02 Oct 25 07:41 UTC │                     │
	│ stop    │ -p scheduled-stop-482322 --schedule 15s                                                                                                                   │ scheduled-stop-482322 │ jenkins │ v1.37.0 │ 02 Oct 25 07:41 UTC │                     │
	│ stop    │ -p scheduled-stop-482322 --schedule 15s                                                                                                                   │ scheduled-stop-482322 │ jenkins │ v1.37.0 │ 02 Oct 25 07:41 UTC │                     │
	│ stop    │ -p scheduled-stop-482322 --schedule 15s                                                                                                                   │ scheduled-stop-482322 │ jenkins │ v1.37.0 │ 02 Oct 25 07:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:41:04
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:41:04.372110  426502 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:41:04.372269  426502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:41:04.372273  426502 out.go:374] Setting ErrFile to fd 2...
	I1002 07:41:04.372278  426502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:41:04.372515  426502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:41:04.372927  426502 out.go:368] Setting JSON to false
	I1002 07:41:04.373746  426502 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8616,"bootTime":1759382249,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:41:04.373805  426502 start.go:140] virtualization:  
	I1002 07:41:04.377630  426502 out.go:179] * [scheduled-stop-482322] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:41:04.382096  426502 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:41:04.382211  426502 notify.go:220] Checking for updates...
	I1002 07:41:04.388758  426502 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:41:04.392052  426502 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:41:04.395219  426502 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:41:04.398149  426502 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:41:04.401123  426502 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:41:04.404380  426502 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:41:04.433155  426502 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:41:04.433270  426502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:41:04.495713  426502 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:41:04.486331732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:41:04.495811  426502 docker.go:318] overlay module found
	I1002 07:41:04.498997  426502 out.go:179] * Using the docker driver based on user configuration
	I1002 07:41:04.501988  426502 start.go:304] selected driver: docker
	I1002 07:41:04.501996  426502 start.go:924] validating driver "docker" against <nil>
	I1002 07:41:04.502008  426502 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:41:04.502771  426502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:41:04.559056  426502 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:41:04.549912659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:41:04.559232  426502 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 07:41:04.559490  426502 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 07:41:04.562465  426502 out.go:179] * Using Docker driver with root privileges
	I1002 07:41:04.565282  426502 cni.go:84] Creating CNI manager for ""
	I1002 07:41:04.565344  426502 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:41:04.565352  426502 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 07:41:04.565432  426502 start.go:348] cluster config:
	{Name:scheduled-stop-482322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-482322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:41:04.570392  426502 out.go:179] * Starting "scheduled-stop-482322" primary control-plane node in "scheduled-stop-482322" cluster
	I1002 07:41:04.573281  426502 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:41:04.576214  426502 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:41:04.579415  426502 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:41:04.579472  426502 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:41:04.579481  426502 cache.go:58] Caching tarball of preloaded images
	I1002 07:41:04.579529  426502 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:41:04.579586  426502 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:41:04.579596  426502 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:41:04.579962  426502 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/config.json ...
	I1002 07:41:04.579981  426502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/config.json: {Name:mk30fce2c3c25cc48987453ecfb225954df53474 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:41:04.599039  426502 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:41:04.599054  426502 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:41:04.599115  426502 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:41:04.599137  426502 start.go:360] acquireMachinesLock for scheduled-stop-482322: {Name:mkad50c50a8898753212f489fb44a917d497e34f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:41:04.599253  426502 start.go:364] duration metric: took 99.759µs to acquireMachinesLock for "scheduled-stop-482322"
	I1002 07:41:04.599280  426502 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-482322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-482322 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:41:04.599349  426502 start.go:125] createHost starting for "" (driver="docker")
	I1002 07:41:04.602805  426502 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 07:41:04.603055  426502 start.go:159] libmachine.API.Create for "scheduled-stop-482322" (driver="docker")
	I1002 07:41:04.603123  426502 client.go:168] LocalClient.Create starting
	I1002 07:41:04.603204  426502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem
	I1002 07:41:04.603237  426502 main.go:141] libmachine: Decoding PEM data...
	I1002 07:41:04.603253  426502 main.go:141] libmachine: Parsing certificate...
	I1002 07:41:04.603310  426502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem
	I1002 07:41:04.603326  426502 main.go:141] libmachine: Decoding PEM data...
	I1002 07:41:04.603344  426502 main.go:141] libmachine: Parsing certificate...
	I1002 07:41:04.603703  426502 cli_runner.go:164] Run: docker network inspect scheduled-stop-482322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 07:41:04.620054  426502 cli_runner.go:211] docker network inspect scheduled-stop-482322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 07:41:04.620136  426502 network_create.go:284] running [docker network inspect scheduled-stop-482322] to gather additional debugging logs...
	I1002 07:41:04.620150  426502 cli_runner.go:164] Run: docker network inspect scheduled-stop-482322
	W1002 07:41:04.636809  426502 cli_runner.go:211] docker network inspect scheduled-stop-482322 returned with exit code 1
	I1002 07:41:04.636830  426502 network_create.go:287] error running [docker network inspect scheduled-stop-482322]: docker network inspect scheduled-stop-482322: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-482322 not found
	I1002 07:41:04.636841  426502 network_create.go:289] output of [docker network inspect scheduled-stop-482322]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-482322 not found
	
	** /stderr **
	I1002 07:41:04.636956  426502 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:41:04.654361  426502 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-87a294cab4b5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:50:ad:a1:2a:88} reservation:<nil>}
	I1002 07:41:04.654632  426502 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-560172b9232e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:9f:ec:fb:3f:87} reservation:<nil>}
	I1002 07:41:04.654796  426502 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2eae6334e56d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:6a:a0:79:3a:d9} reservation:<nil>}
	I1002 07:41:04.655145  426502 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400196d910}
	I1002 07:41:04.655160  426502 network_create.go:124] attempt to create docker network scheduled-stop-482322 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1002 07:41:04.655226  426502 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-482322 scheduled-stop-482322
	I1002 07:41:04.712613  426502 network_create.go:108] docker network scheduled-stop-482322 192.168.76.0/24 created
	I1002 07:41:04.712635  426502 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-482322" container
	I1002 07:41:04.712707  426502 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 07:41:04.728410  426502 cli_runner.go:164] Run: docker volume create scheduled-stop-482322 --label name.minikube.sigs.k8s.io=scheduled-stop-482322 --label created_by.minikube.sigs.k8s.io=true
	I1002 07:41:04.751514  426502 oci.go:103] Successfully created a docker volume scheduled-stop-482322
	I1002 07:41:04.751590  426502 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-482322-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-482322 --entrypoint /usr/bin/test -v scheduled-stop-482322:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 07:41:05.290663  426502 oci.go:107] Successfully prepared a docker volume scheduled-stop-482322
	I1002 07:41:05.290715  426502 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:41:05.290735  426502 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 07:41:05.290813  426502 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-482322:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 07:41:09.747558  426502 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-482322:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.456709593s)
	I1002 07:41:09.747579  426502 kic.go:203] duration metric: took 4.456840917s to extract preloaded images to volume ...
	W1002 07:41:09.747731  426502 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 07:41:09.747837  426502 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 07:41:09.802002  426502 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-482322 --name scheduled-stop-482322 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-482322 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-482322 --network scheduled-stop-482322 --ip 192.168.76.2 --volume scheduled-stop-482322:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 07:41:10.143585  426502 cli_runner.go:164] Run: docker container inspect scheduled-stop-482322 --format={{.State.Running}}
	I1002 07:41:10.172862  426502 cli_runner.go:164] Run: docker container inspect scheduled-stop-482322 --format={{.State.Status}}
	I1002 07:41:10.198428  426502 cli_runner.go:164] Run: docker exec scheduled-stop-482322 stat /var/lib/dpkg/alternatives/iptables
	I1002 07:41:10.254722  426502 oci.go:144] the created container "scheduled-stop-482322" has a running status.
	I1002 07:41:10.254741  426502 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/scheduled-stop-482322/id_rsa...
	I1002 07:41:10.733723  426502 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-292504/.minikube/machines/scheduled-stop-482322/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 07:41:10.766654  426502 cli_runner.go:164] Run: docker container inspect scheduled-stop-482322 --format={{.State.Status}}
	I1002 07:41:10.806057  426502 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 07:41:10.806068  426502 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-482322 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 07:41:10.855443  426502 cli_runner.go:164] Run: docker container inspect scheduled-stop-482322 --format={{.State.Status}}
	I1002 07:41:10.871949  426502 machine.go:93] provisionDockerMachine start ...
	I1002 07:41:10.872066  426502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-482322
	I1002 07:41:10.895177  426502 main.go:141] libmachine: Using SSH client type: native
	I1002 07:41:10.895521  426502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33313 <nil> <nil>}
	I1002 07:41:10.895528  426502 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:41:11.079218  426502 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-482322
	
	I1002 07:41:11.079233  426502 ubuntu.go:182] provisioning hostname "scheduled-stop-482322"
	I1002 07:41:11.079306  426502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-482322
	I1002 07:41:11.104450  426502 main.go:141] libmachine: Using SSH client type: native
	I1002 07:41:11.104751  426502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33313 <nil> <nil>}
	I1002 07:41:11.104760  426502 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-482322 && echo "scheduled-stop-482322" | sudo tee /etc/hostname
	I1002 07:41:11.261897  426502 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-482322
	
	I1002 07:41:11.261978  426502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-482322
	I1002 07:41:11.280613  426502 main.go:141] libmachine: Using SSH client type: native
	I1002 07:41:11.280908  426502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33313 <nil> <nil>}
	I1002 07:41:11.280928  426502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-482322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-482322/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-482322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:41:11.419361  426502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:41:11.419380  426502 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:41:11.419406  426502 ubuntu.go:190] setting up certificates
	I1002 07:41:11.419414  426502 provision.go:84] configureAuth start
	I1002 07:41:11.419475  426502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-482322
	I1002 07:41:11.438006  426502 provision.go:143] copyHostCerts
	I1002 07:41:11.438086  426502 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:41:11.438094  426502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:41:11.438172  426502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:41:11.438273  426502 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:41:11.438277  426502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:41:11.438300  426502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:41:11.438361  426502 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:41:11.438364  426502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:41:11.438386  426502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:41:11.438436  426502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-482322 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-482322]
	I1002 07:41:11.661623  426502 provision.go:177] copyRemoteCerts
	I1002 07:41:11.661678  426502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:41:11.661715  426502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-482322
	I1002 07:41:11.680369  426502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/scheduled-stop-482322/id_rsa Username:docker}
	I1002 07:41:11.775463  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:41:11.793983  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 07:41:11.814776  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:41:11.832912  426502 provision.go:87] duration metric: took 413.47169ms to configureAuth
	I1002 07:41:11.832931  426502 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:41:11.833119  426502 config.go:182] Loaded profile config "scheduled-stop-482322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:41:11.833226  426502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-482322
	I1002 07:41:11.850565  426502 main.go:141] libmachine: Using SSH client type: native
	I1002 07:41:11.850871  426502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33313 <nil> <nil>}
	I1002 07:41:11.850883  426502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:41:12.096869  426502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:41:12.096886  426502 machine.go:96] duration metric: took 1.224925745s to provisionDockerMachine
	I1002 07:41:12.096894  426502 client.go:171] duration metric: took 7.493765504s to LocalClient.Create
	I1002 07:41:12.096904  426502 start.go:167] duration metric: took 7.493849732s to libmachine.API.Create "scheduled-stop-482322"
	I1002 07:41:12.096909  426502 start.go:293] postStartSetup for "scheduled-stop-482322" (driver="docker")
	I1002 07:41:12.096918  426502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:41:12.096983  426502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:41:12.097036  426502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-482322
	I1002 07:41:12.114458  426502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/scheduled-stop-482322/id_rsa Username:docker}
	I1002 07:41:12.215407  426502 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:41:12.218683  426502 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:41:12.218704  426502 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:41:12.218714  426502 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:41:12.218768  426502 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:41:12.218840  426502 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:41:12.218938  426502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:41:12.226269  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:41:12.243425  426502 start.go:296] duration metric: took 146.501059ms for postStartSetup
	I1002 07:41:12.243781  426502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-482322
	I1002 07:41:12.260845  426502 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/config.json ...
	I1002 07:41:12.261119  426502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:41:12.261157  426502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-482322
	I1002 07:41:12.277799  426502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/scheduled-stop-482322/id_rsa Username:docker}
	I1002 07:41:12.367984  426502 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:41:12.372313  426502 start.go:128] duration metric: took 7.772948571s to createHost
	I1002 07:41:12.372329  426502 start.go:83] releasing machines lock for "scheduled-stop-482322", held for 7.773067251s
	I1002 07:41:12.372399  426502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-482322
	I1002 07:41:12.388719  426502 ssh_runner.go:195] Run: cat /version.json
	I1002 07:41:12.388764  426502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:41:12.388772  426502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-482322
	I1002 07:41:12.388858  426502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-482322
	I1002 07:41:12.411269  426502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/scheduled-stop-482322/id_rsa Username:docker}
	I1002 07:41:12.417676  426502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/scheduled-stop-482322/id_rsa Username:docker}
	I1002 07:41:12.599144  426502 ssh_runner.go:195] Run: systemctl --version
	I1002 07:41:12.605475  426502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:41:12.640228  426502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:41:12.644744  426502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:41:12.644817  426502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:41:12.672552  426502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 07:41:12.672577  426502 start.go:495] detecting cgroup driver to use...
	I1002 07:41:12.672611  426502 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:41:12.672681  426502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:41:12.690634  426502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:41:12.703370  426502 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:41:12.703424  426502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:41:12.721193  426502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:41:12.739706  426502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:41:12.863800  426502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:41:12.989122  426502 docker.go:234] disabling docker service ...
	I1002 07:41:12.989179  426502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:41:13.016725  426502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:41:13.030594  426502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:41:13.148558  426502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:41:13.271686  426502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:41:13.283937  426502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:41:13.297929  426502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:41:13.297998  426502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:41:13.306591  426502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:41:13.306660  426502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:41:13.315621  426502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:41:13.324080  426502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:41:13.332709  426502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:41:13.341132  426502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:41:13.349613  426502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:41:13.362988  426502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:41:13.371961  426502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:41:13.380219  426502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:41:13.387861  426502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:41:13.493835  426502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 07:41:13.621384  426502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:41:13.621456  426502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:41:13.625556  426502 start.go:563] Will wait 60s for crictl version
	I1002 07:41:13.625615  426502 ssh_runner.go:195] Run: which crictl
	I1002 07:41:13.629248  426502 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:41:13.656927  426502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:41:13.657001  426502 ssh_runner.go:195] Run: crio --version
	I1002 07:41:13.686478  426502 ssh_runner.go:195] Run: crio --version
	I1002 07:41:13.720680  426502 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:41:13.723530  426502 cli_runner.go:164] Run: docker network inspect scheduled-stop-482322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:41:13.739690  426502 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 07:41:13.743387  426502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:41:13.753089  426502 kubeadm.go:883] updating cluster {Name:scheduled-stop-482322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-482322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:41:13.753200  426502 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:41:13.753256  426502 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:41:13.785911  426502 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:41:13.785923  426502 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:41:13.785977  426502 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:41:13.815756  426502 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:41:13.815771  426502 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:41:13.815778  426502 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 07:41:13.815862  426502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=scheduled-stop-482322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-482322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:41:13.815936  426502 ssh_runner.go:195] Run: crio config
	I1002 07:41:13.872216  426502 cni.go:84] Creating CNI manager for ""
	I1002 07:41:13.872227  426502 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:41:13.872245  426502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:41:13.872276  426502 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-482322 NodeName:scheduled-stop-482322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:41:13.872404  426502 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "scheduled-stop-482322"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:41:13.872488  426502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:41:13.880515  426502 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:41:13.880578  426502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:41:13.888972  426502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1002 07:41:13.902618  426502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:41:13.915977  426502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1002 07:41:13.928723  426502 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:41:13.932403  426502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:41:13.942227  426502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:41:14.063713  426502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:41:14.079822  426502 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322 for IP: 192.168.76.2
	I1002 07:41:14.079841  426502 certs.go:195] generating shared ca certs ...
	I1002 07:41:14.079889  426502 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:41:14.080288  426502 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:41:14.080401  426502 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:41:14.080413  426502 certs.go:257] generating profile certs ...
	I1002 07:41:14.080537  426502 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/client.key
	I1002 07:41:14.080558  426502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/client.crt with IP's: []
	I1002 07:41:14.402953  426502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/client.crt ...
	I1002 07:41:14.402970  426502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/client.crt: {Name:mk56edc5dbb3c5f0d8b7fd8041087f7bdcd67c6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:41:14.403192  426502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/client.key ...
	I1002 07:41:14.403202  426502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/client.key: {Name:mk5abf5209fc603bdc33b9bea79bbecbbbc5bb4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:41:14.404124  426502 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/apiserver.key.e61186e3
	I1002 07:41:14.404138  426502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/apiserver.crt.e61186e3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 07:41:14.555264  426502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/apiserver.crt.e61186e3 ...
	I1002 07:41:14.555281  426502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/apiserver.crt.e61186e3: {Name:mkfe3ef65c0bd39b3a7724d3601ade4f23d9318d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:41:14.555473  426502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/apiserver.key.e61186e3 ...
	I1002 07:41:14.555481  426502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/apiserver.key.e61186e3: {Name:mkfbf793b97923f82efe26ff9b523262da2c6128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:41:14.555565  426502 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/apiserver.crt.e61186e3 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/apiserver.crt
	I1002 07:41:14.555646  426502 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/apiserver.key.e61186e3 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/apiserver.key
	I1002 07:41:14.555699  426502 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/proxy-client.key
	I1002 07:41:14.555710  426502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/proxy-client.crt with IP's: []
	I1002 07:41:14.760123  426502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/proxy-client.crt ...
	I1002 07:41:14.760139  426502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/proxy-client.crt: {Name:mkc9b7c2139f741c56389469870b0775d122222b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:41:14.760331  426502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/proxy-client.key ...
	I1002 07:41:14.760339  426502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/proxy-client.key: {Name:mk0a921c67f005e3bbf78d0f56d55a2a6e3e4765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:41:14.760516  426502 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:41:14.760554  426502 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:41:14.760599  426502 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:41:14.760622  426502 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:41:14.760645  426502 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:41:14.760668  426502 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:41:14.760711  426502 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:41:14.761276  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:41:14.783494  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:41:14.801369  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:41:14.818798  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:41:14.836707  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 07:41:14.854250  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:41:14.872258  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:41:14.890689  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/scheduled-stop-482322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:41:14.911067  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:41:14.930762  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:41:14.951130  426502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:41:14.968861  426502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:41:14.981789  426502 ssh_runner.go:195] Run: openssl version
	I1002 07:41:14.988092  426502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:41:14.996690  426502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:41:15.000376  426502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:41:15.000443  426502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:41:15.048596  426502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:41:15.058969  426502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:41:15.069575  426502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:41:15.074096  426502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:41:15.074161  426502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:41:15.117116  426502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:41:15.126410  426502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:41:15.135342  426502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:41:15.139783  426502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:41:15.139844  426502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:41:15.182781  426502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:41:15.191518  426502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:41:15.195132  426502 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 07:41:15.195188  426502 kubeadm.go:400] StartCluster: {Name:scheduled-stop-482322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-482322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:41:15.195246  426502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:41:15.195347  426502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:41:15.223233  426502 cri.go:89] found id: ""
	I1002 07:41:15.223320  426502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:41:15.231574  426502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 07:41:15.239836  426502 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 07:41:15.239892  426502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:41:15.248223  426502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 07:41:15.248231  426502 kubeadm.go:157] found existing configuration files:
	
	I1002 07:41:15.248284  426502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 07:41:15.256337  426502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 07:41:15.256402  426502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 07:41:15.264125  426502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 07:41:15.272288  426502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 07:41:15.272350  426502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:41:15.280328  426502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 07:41:15.288970  426502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 07:41:15.289029  426502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:41:15.296664  426502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 07:41:15.304898  426502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 07:41:15.304957  426502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:41:15.312604  426502 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 07:41:15.352788  426502 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 07:41:15.352839  426502 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 07:41:15.379182  426502 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 07:41:15.379278  426502 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 07:41:15.379322  426502 kubeadm.go:318] OS: Linux
	I1002 07:41:15.379368  426502 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 07:41:15.379418  426502 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 07:41:15.379507  426502 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 07:41:15.379577  426502 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 07:41:15.379635  426502 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 07:41:15.379693  426502 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 07:41:15.379751  426502 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 07:41:15.379800  426502 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 07:41:15.379863  426502 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 07:41:15.453446  426502 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 07:41:15.453587  426502 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 07:41:15.453699  426502 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 07:41:15.461790  426502 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 07:41:15.468050  426502 out.go:252]   - Generating certificates and keys ...
	I1002 07:41:15.468162  426502 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 07:41:15.468249  426502 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 07:41:15.584812  426502 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 07:41:16.010198  426502 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 07:41:16.649377  426502 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 07:41:17.565090  426502 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 07:41:17.784203  426502 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 07:41:17.784379  426502 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-482322] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 07:41:18.757943  426502 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 07:41:18.758118  426502 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-482322] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 07:41:18.956547  426502 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 07:41:19.062595  426502 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 07:41:19.356454  426502 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 07:41:19.356690  426502 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 07:41:19.844909  426502 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 07:41:20.318184  426502 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 07:41:20.405468  426502 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 07:41:20.613265  426502 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 07:41:21.190537  426502 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 07:41:21.191308  426502 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 07:41:21.194126  426502 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 07:41:21.197704  426502 out.go:252]   - Booting up control plane ...
	I1002 07:41:21.197804  426502 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 07:41:21.197884  426502 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 07:41:21.197953  426502 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 07:41:21.212982  426502 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 07:41:21.213556  426502 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 07:41:21.223401  426502 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 07:41:21.223507  426502 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 07:41:21.223551  426502 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 07:41:21.353526  426502 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 07:41:21.353643  426502 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 07:41:23.354842  426502 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001779429s
	I1002 07:41:23.358733  426502 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 07:41:23.358827  426502 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 07:41:23.358919  426502 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 07:41:23.359000  426502 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:41:27.407916  426502 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.04873454s
	I1002 07:41:30.119411  426502 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.760590515s
	I1002 07:41:30.361243  426502 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002455917s
	I1002 07:41:30.381627  426502 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 07:41:30.395295  426502 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 07:41:30.410206  426502 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 07:41:30.410419  426502 kubeadm.go:318] [mark-control-plane] Marking the node scheduled-stop-482322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 07:41:30.426611  426502 kubeadm.go:318] [bootstrap-token] Using token: yquoev.bxvrot25hhvfvz8m
	I1002 07:41:30.429583  426502 out.go:252]   - Configuring RBAC rules ...
	I1002 07:41:30.429706  426502 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 07:41:30.436235  426502 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 07:41:30.444269  426502 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 07:41:30.448434  426502 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 07:41:30.452860  426502 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 07:41:30.457034  426502 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 07:41:30.768703  426502 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 07:41:31.240709  426502 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 07:41:31.768639  426502 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 07:41:31.769756  426502 kubeadm.go:318] 
	I1002 07:41:31.769826  426502 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 07:41:31.769830  426502 kubeadm.go:318] 
	I1002 07:41:31.769919  426502 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 07:41:31.769923  426502 kubeadm.go:318] 
	I1002 07:41:31.769949  426502 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 07:41:31.770013  426502 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 07:41:31.770065  426502 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 07:41:31.770069  426502 kubeadm.go:318] 
	I1002 07:41:31.770124  426502 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 07:41:31.770127  426502 kubeadm.go:318] 
	I1002 07:41:31.770175  426502 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 07:41:31.770179  426502 kubeadm.go:318] 
	I1002 07:41:31.770232  426502 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 07:41:31.770309  426502 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 07:41:31.770379  426502 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 07:41:31.770382  426502 kubeadm.go:318] 
	I1002 07:41:31.770468  426502 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 07:41:31.770548  426502 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 07:41:31.770551  426502 kubeadm.go:318] 
	I1002 07:41:31.770637  426502 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token yquoev.bxvrot25hhvfvz8m \
	I1002 07:41:31.770744  426502 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf \
	I1002 07:41:31.770764  426502 kubeadm.go:318] 	--control-plane 
	I1002 07:41:31.770767  426502 kubeadm.go:318] 
	I1002 07:41:31.770854  426502 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 07:41:31.770857  426502 kubeadm.go:318] 
	I1002 07:41:31.770942  426502 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token yquoev.bxvrot25hhvfvz8m \
	I1002 07:41:31.771116  426502 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf 
	I1002 07:41:31.775304  426502 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 07:41:31.775542  426502 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 07:41:31.775649  426502 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:41:31.775663  426502 cni.go:84] Creating CNI manager for ""
	I1002 07:41:31.775670  426502 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:41:31.778719  426502 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 07:41:31.781421  426502 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 07:41:31.785606  426502 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 07:41:31.785617  426502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 07:41:31.799348  426502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 07:41:32.099343  426502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 07:41:32.099474  426502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 07:41:32.099571  426502 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-482322 minikube.k8s.io/updated_at=2025_10_02T07_41_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=scheduled-stop-482322 minikube.k8s.io/primary=true
	I1002 07:41:32.245129  426502 ops.go:34] apiserver oom_adj: -16
	I1002 07:41:32.256867  426502 kubeadm.go:1113] duration metric: took 157.439722ms to wait for elevateKubeSystemPrivileges
	I1002 07:41:32.256887  426502 kubeadm.go:402] duration metric: took 17.061705654s to StartCluster
	I1002 07:41:32.256902  426502 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:41:32.256960  426502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:41:32.257586  426502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:41:32.257782  426502 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:41:32.257883  426502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 07:41:32.258092  426502 config.go:182] Loaded profile config "scheduled-stop-482322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:41:32.258122  426502 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:41:32.258220  426502 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-482322"
	I1002 07:41:32.258237  426502 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-482322"
	I1002 07:41:32.258257  426502 host.go:66] Checking if "scheduled-stop-482322" exists ...
	I1002 07:41:32.258762  426502 cli_runner.go:164] Run: docker container inspect scheduled-stop-482322 --format={{.State.Status}}
	I1002 07:41:32.258985  426502 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-482322"
	I1002 07:41:32.258999  426502 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-482322"
	I1002 07:41:32.259323  426502 cli_runner.go:164] Run: docker container inspect scheduled-stop-482322 --format={{.State.Status}}
	I1002 07:41:32.262943  426502 out.go:179] * Verifying Kubernetes components...
	I1002 07:41:32.271546  426502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:41:32.288588  426502 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-482322"
	I1002 07:41:32.288618  426502 host.go:66] Checking if "scheduled-stop-482322" exists ...
	I1002 07:41:32.289049  426502 cli_runner.go:164] Run: docker container inspect scheduled-stop-482322 --format={{.State.Status}}
	I1002 07:41:32.310722  426502 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:41:32.313573  426502 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:41:32.313600  426502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:41:32.313666  426502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-482322
	I1002 07:41:32.336149  426502 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:41:32.336167  426502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:41:32.336226  426502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-482322
	I1002 07:41:32.350111  426502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/scheduled-stop-482322/id_rsa Username:docker}
	I1002 07:41:32.375826  426502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33313 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/scheduled-stop-482322/id_rsa Username:docker}
	I1002 07:41:32.578058  426502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 07:41:32.593788  426502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:41:32.610870  426502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:41:32.630149  426502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:41:32.906669  426502 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1002 07:41:32.908468  426502 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:41:32.908518  426502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:41:33.177022  426502 api_server.go:72] duration metric: took 919.215627ms to wait for apiserver process to appear ...
	I1002 07:41:33.177034  426502 api_server.go:88] waiting for apiserver healthz status ...
	I1002 07:41:33.177048  426502 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 07:41:33.179899  426502 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1002 07:41:33.183025  426502 addons.go:514] duration metric: took 924.880228ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1002 07:41:33.189982  426502 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 07:41:33.191365  426502 api_server.go:141] control plane version: v1.34.1
	I1002 07:41:33.191381  426502 api_server.go:131] duration metric: took 14.342282ms to wait for apiserver health ...
	I1002 07:41:33.191388  426502 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 07:41:33.194566  426502 system_pods.go:59] 5 kube-system pods found
	I1002 07:41:33.194587  426502 system_pods.go:61] "etcd-scheduled-stop-482322" [6b11fa9e-6f99-4d63-ba03-ce6934be9f62] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 07:41:33.194608  426502 system_pods.go:61] "kube-apiserver-scheduled-stop-482322" [ca38fbb8-5819-48cd-aace-f0ef5862ead9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 07:41:33.194616  426502 system_pods.go:61] "kube-controller-manager-scheduled-stop-482322" [b725a144-6abb-4934-94ae-7cccee2dbc3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 07:41:33.194625  426502 system_pods.go:61] "kube-scheduler-scheduled-stop-482322" [7c9de7d5-bac5-4191-8af8-08d7e4a9d48f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 07:41:33.194632  426502 system_pods.go:61] "storage-provisioner" [63b76b8d-27fc-48f4-96de-918387fee749] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 07:41:33.194638  426502 system_pods.go:74] duration metric: took 3.244325ms to wait for pod list to return data ...
	I1002 07:41:33.194649  426502 kubeadm.go:586] duration metric: took 936.847567ms to wait for: map[apiserver:true system_pods:true]
	I1002 07:41:33.194660  426502 node_conditions.go:102] verifying NodePressure condition ...
	I1002 07:41:33.197427  426502 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 07:41:33.197446  426502 node_conditions.go:123] node cpu capacity is 2
	I1002 07:41:33.197457  426502 node_conditions.go:105] duration metric: took 2.793802ms to run NodePressure ...
	I1002 07:41:33.197469  426502 start.go:241] waiting for startup goroutines ...
	I1002 07:41:33.410460  426502 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-482322" context rescaled to 1 replicas
	I1002 07:41:33.410497  426502 start.go:246] waiting for cluster config update ...
	I1002 07:41:33.410508  426502 start.go:255] writing updated cluster config ...
	I1002 07:41:33.410812  426502 ssh_runner.go:195] Run: rm -f paused
	I1002 07:41:33.468953  426502 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 07:41:33.472387  426502 out.go:179] * Done! kubectl is now configured to use "scheduled-stop-482322" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.578226989Z" level=info msg="Creating container: kube-system/etcd-scheduled-stop-482322/etcd" id=f48446d6-99c3-431d-9cd8-6af74680c626 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.579324893Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d87902f2-9a74-4e83-93af-e6a4d6eb93dc name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.579689967Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.5804103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.586910839Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.587568698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.59081482Z" level=info msg="Creating container: kube-system/kube-scheduler-scheduled-stop-482322/kube-scheduler" id=c04302e9-a621-4f1e-b33d-e6336ff45f1f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.591280596Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.596951212Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.60508602Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.606042893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.616983892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.617502838Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.64125128Z" level=info msg="Created container 17a89827a0d6a488288bced23500b02bbee0129c7b1e842ac1556cb0601cd262: kube-system/kube-apiserver-scheduled-stop-482322/kube-apiserver" id=35bac9c2-5b7e-4a79-b38d-5097d9faa07d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.642170426Z" level=info msg="Created container e815293a5339a00a93295e87644cb439aca75b58d059e5aa4aeea842fb7986dd: kube-system/etcd-scheduled-stop-482322/etcd" id=f48446d6-99c3-431d-9cd8-6af74680c626 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.647526077Z" level=info msg="Created container 706b925a41f2244c272a92206b183389cd3c1863ccc09b8a5a991ba177c6a68f: kube-system/kube-controller-manager-scheduled-stop-482322/kube-controller-manager" id=4ea9fa1c-0500-455b-b84c-d24113db6067 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.648674148Z" level=info msg="Starting container: e815293a5339a00a93295e87644cb439aca75b58d059e5aa4aeea842fb7986dd" id=8d33b819-65fe-401d-aeca-5ea258ec2028 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.651574281Z" level=info msg="Starting container: 706b925a41f2244c272a92206b183389cd3c1863ccc09b8a5a991ba177c6a68f" id=25767737-205a-4a8b-8d86-8fc4e9a23f44 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.654664808Z" level=info msg="Created container cc33acd1879a62fad4d6bbad5392285a747c0af34dd255301b5efdb2f66536a9: kube-system/kube-scheduler-scheduled-stop-482322/kube-scheduler" id=c04302e9-a621-4f1e-b33d-e6336ff45f1f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.655668664Z" level=info msg="Starting container: cc33acd1879a62fad4d6bbad5392285a747c0af34dd255301b5efdb2f66536a9" id=2d2d357d-a867-44ae-a2ec-81238babef07 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.659497108Z" level=info msg="Started container" PID=1222 containerID=e815293a5339a00a93295e87644cb439aca75b58d059e5aa4aeea842fb7986dd description=kube-system/etcd-scheduled-stop-482322/etcd id=8d33b819-65fe-401d-aeca-5ea258ec2028 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15119513b9daf47fec3b27432e6ab3ade996d76d1f8178b98d77e2d65d62c646
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.663230174Z" level=info msg="Starting container: 17a89827a0d6a488288bced23500b02bbee0129c7b1e842ac1556cb0601cd262" id=c374a579-7636-4e31-899e-769291b515a5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.669835263Z" level=info msg="Started container" PID=1209 containerID=706b925a41f2244c272a92206b183389cd3c1863ccc09b8a5a991ba177c6a68f description=kube-system/kube-controller-manager-scheduled-stop-482322/kube-controller-manager id=25767737-205a-4a8b-8d86-8fc4e9a23f44 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a43d4d19807c4112d3b05f328343329c37ec278d4a4767a141b716db0de1191
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.670656734Z" level=info msg="Started container" PID=1228 containerID=cc33acd1879a62fad4d6bbad5392285a747c0af34dd255301b5efdb2f66536a9 description=kube-system/kube-scheduler-scheduled-stop-482322/kube-scheduler id=2d2d357d-a867-44ae-a2ec-81238babef07 name=/runtime.v1.RuntimeService/StartContainer sandboxID=95ab6ac194571d08cc34dcf7d7b21428142b9e715c8a1471fd0d99d3746555a4
	Oct 02 07:41:23 scheduled-stop-482322 crio[839]: time="2025-10-02T07:41:23.67983292Z" level=info msg="Started container" PID=1213 containerID=17a89827a0d6a488288bced23500b02bbee0129c7b1e842ac1556cb0601cd262 description=kube-system/kube-apiserver-scheduled-stop-482322/kube-apiserver id=c374a579-7636-4e31-899e-769291b515a5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8955f7a65ac3fd8b6cf5012e65b33661043ff9c9095d0f7ba4ebc6ddf4445d6b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                             NAMESPACE
	cc33acd1879a6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            0                   95ab6ac194571       kube-scheduler-scheduled-stop-482322            kube-system
	e815293a5339a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      0                   15119513b9daf       etcd-scheduled-stop-482322                      kube-system
	17a89827a0d6a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            0                   8955f7a65ac3f       kube-apiserver-scheduled-stop-482322            kube-system
	706b925a41f22       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   0                   4a43d4d19807c       kube-controller-manager-scheduled-stop-482322   kube-system
	
	
	==> describe nodes <==
	Name:               scheduled-stop-482322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-482322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=scheduled-stop-482322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_41_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:41:28 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-482322
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:41:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:41:31 +0000   Thu, 02 Oct 2025 07:41:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:41:31 +0000   Thu, 02 Oct 2025 07:41:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:41:31 +0000   Thu, 02 Oct 2025 07:41:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 02 Oct 2025 07:41:31 +0000   Thu, 02 Oct 2025 07:41:23 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-482322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 67d9e05dc2eb450eb0596775beaf578b
	  System UUID:                2ce27f64-14ac-4f2b-a8ff-a18ca1258620
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-482322                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4s
	  kube-system                 kube-apiserver-scheduled-stop-482322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-482322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-482322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From     Message
	  ----     ------                   ----               ----     -------
	  Normal   Starting                 12s                kubelet  Starting kubelet.
	  Warning  CgroupV1                 12s                kubelet  cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 12s)  kubelet  Node scheduled-stop-482322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet  Node scheduled-stop-482322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x8 over 12s)  kubelet  Node scheduled-stop-482322 status is now: NodeHasSufficientPID
	  Normal   Starting                 4s                 kubelet  Starting kubelet.
	  Warning  CgroupV1                 4s                 kubelet  cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4s                 kubelet  Node scheduled-stop-482322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s                 kubelet  Node scheduled-stop-482322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s                 kubelet  Node scheduled-stop-482322 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Oct 2 06:49] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:03] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:06] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:07] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:08] overlayfs: idmapped layers are currently not supported
	[  +3.056037] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:16] overlayfs: idmapped layers are currently not supported
	[  +2.690454] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:30] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:31] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:33] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e815293a5339a00a93295e87644cb439aca75b58d059e5aa4aeea842fb7986dd] <==
	{"level":"warn","ts":"2025-10-02T07:41:26.424463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.427000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.444396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.455218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.481221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.496437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.516326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.535756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.552935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.582660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.621525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.631797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.647621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.667015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.697796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.703695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.720407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.743635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.758009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.775668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.795813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.833707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.875755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:26.892269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:41:27.079748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33374","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:41:35 up  2:24,  0 user,  load average: 3.03, 2.10, 1.81
	Linux scheduled-stop-482322 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [17a89827a0d6a488288bced23500b02bbee0129c7b1e842ac1556cb0601cd262] <==
	I1002 07:41:28.148979       1 cache.go:39] Caches are synced for autoregister controller
	I1002 07:41:28.157355       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 07:41:28.175646       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:41:28.175710       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1002 07:41:28.203828       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 07:41:28.221803       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:41:28.221877       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 07:41:28.273095       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 07:41:28.273180       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 07:41:28.273527       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 07:41:28.273546       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 07:41:28.273838       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:41:28.832551       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 07:41:28.841573       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 07:41:28.841663       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:41:30.054525       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 07:41:30.173258       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 07:41:30.283150       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 07:41:30.291380       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1002 07:41:30.292542       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:41:30.298350       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:41:30.993353       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 07:41:31.202896       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 07:41:31.238947       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 07:41:31.252705       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [706b925a41f2244c272a92206b183389cd3c1863ccc09b8a5a991ba177c6a68f] <==
	I1002 07:41:34.847712       1 shared_informer.go:682] "Warning: resync period is smaller than resync check period and the informer has already started. Changing it to the resync check period" resyncPeriod="17h37m44.114181273s" resyncCheckPeriod="21h48m44.618863237s"
	I1002 07:41:34.847740       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1002 07:41:34.847755       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1002 07:41:34.847773       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I1002 07:41:34.847797       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I1002 07:41:34.847818       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I1002 07:41:34.847837       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1002 07:41:34.847852       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I1002 07:41:34.847866       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1002 07:41:34.847881       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1002 07:41:34.847895       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1002 07:41:34.847922       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1002 07:41:34.847940       1 controllermanager.go:781] "Started controller" controller="resourcequota-controller"
	I1002 07:41:34.848180       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I1002 07:41:34.848191       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 07:41:34.848210       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1002 07:41:34.990233       1 controllermanager.go:781] "Started controller" controller="serviceaccount-controller"
	I1002 07:41:34.990293       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1002 07:41:34.990301       1 shared_informer.go:349] "Waiting for caches to sync" controller="service account"
	I1002 07:41:35.142285       1 controllermanager.go:781] "Started controller" controller="daemonset-controller"
	I1002 07:41:35.142394       1 daemon_controller.go:310] "Starting daemon sets controller" logger="daemonset-controller"
	I1002 07:41:35.142402       1 shared_informer.go:349] "Waiting for caches to sync" controller="daemon sets"
	I1002 07:41:35.290165       1 controllermanager.go:781] "Started controller" controller="cronjob-controller"
	I1002 07:41:35.290248       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1002 07:41:35.290256       1 shared_informer.go:349] "Waiting for caches to sync" controller="cronjob"
	
	
	==> kube-scheduler [cc33acd1879a62fad4d6bbad5392285a747c0af34dd255301b5efdb2f66536a9] <==
	I1002 07:41:28.029671       1 serving.go:386] Generated self-signed cert in-memory
	W1002 07:41:30.050579       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 07:41:30.050710       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 07:41:30.050751       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 07:41:30.050793       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 07:41:30.085106       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 07:41:30.085137       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:41:30.087941       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 07:41:30.088124       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:41:30.088142       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:41:30.088160       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 07:41:30.117499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 07:41:31.189414       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.424176    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed1097c5dc5b62e283c4c15dc09a8d03-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-482322\" (UID: \"ed1097c5dc5b62e283c4c15dc09a8d03\") " pod="kube-system/kube-apiserver-scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.424217    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ca6c3fc7ceee2a2f0838749ffd2becb-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-482322\" (UID: \"6ca6c3fc7ceee2a2f0838749ffd2becb\") " pod="kube-system/kube-controller-manager-scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.424240    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ca6c3fc7ceee2a2f0838749ffd2becb-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-482322\" (UID: \"6ca6c3fc7ceee2a2f0838749ffd2becb\") " pod="kube-system/kube-controller-manager-scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.424262    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ca6c3fc7ceee2a2f0838749ffd2becb-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-482322\" (UID: \"6ca6c3fc7ceee2a2f0838749ffd2becb\") " pod="kube-system/kube-controller-manager-scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.424285    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/04566f918579f927beb474df6a031b03-etcd-certs\") pod \"etcd-scheduled-stop-482322\" (UID: \"04566f918579f927beb474df6a031b03\") " pod="kube-system/etcd-scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.424303    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed1097c5dc5b62e283c4c15dc09a8d03-k8s-certs\") pod \"kube-apiserver-scheduled-stop-482322\" (UID: \"ed1097c5dc5b62e283c4c15dc09a8d03\") " pod="kube-system/kube-apiserver-scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.424321    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed1097c5dc5b62e283c4c15dc09a8d03-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-482322\" (UID: \"ed1097c5dc5b62e283c4c15dc09a8d03\") " pod="kube-system/kube-apiserver-scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.424340    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed1097c5dc5b62e283c4c15dc09a8d03-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-482322\" (UID: \"ed1097c5dc5b62e283c4c15dc09a8d03\") " pod="kube-system/kube-apiserver-scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.424359    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ca6c3fc7ceee2a2f0838749ffd2becb-ca-certs\") pod \"kube-controller-manager-scheduled-stop-482322\" (UID: \"6ca6c3fc7ceee2a2f0838749ffd2becb\") " pod="kube-system/kube-controller-manager-scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.424386    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ca6c3fc7ceee2a2f0838749ffd2becb-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-482322\" (UID: \"6ca6c3fc7ceee2a2f0838749ffd2becb\") " pod="kube-system/kube-controller-manager-scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.424407    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5fb0ca64c94ac758d470c581a0659f7-kubeconfig\") pod \"kube-scheduler-scheduled-stop-482322\" (UID: \"d5fb0ca64c94ac758d470c581a0659f7\") " pod="kube-system/kube-scheduler-scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.424422    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/04566f918579f927beb474df6a031b03-etcd-data\") pod \"etcd-scheduled-stop-482322\" (UID: \"04566f918579f927beb474df6a031b03\") " pod="kube-system/etcd-scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.424439    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ca6c3fc7ceee2a2f0838749ffd2becb-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-482322\" (UID: \"6ca6c3fc7ceee2a2f0838749ffd2becb\") " pod="kube-system/kube-controller-manager-scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.424456    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ca6c3fc7ceee2a2f0838749ffd2becb-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-482322\" (UID: \"6ca6c3fc7ceee2a2f0838749ffd2becb\") " pod="kube-system/kube-controller-manager-scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.452132    1282 kubelet_node_status.go:75] "Attempting to register node" node="scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.464443    1282 kubelet_node_status.go:124] "Node was previously registered" node="scheduled-stop-482322"
	Oct 02 07:41:31 scheduled-stop-482322 kubelet[1282]: I1002 07:41:31.464548    1282 kubelet_node_status.go:78] "Successfully registered node" node="scheduled-stop-482322"
	Oct 02 07:41:32 scheduled-stop-482322 kubelet[1282]: I1002 07:41:32.102694    1282 apiserver.go:52] "Watching apiserver"
	Oct 02 07:41:32 scheduled-stop-482322 kubelet[1282]: I1002 07:41:32.119333    1282 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 07:41:32 scheduled-stop-482322 kubelet[1282]: I1002 07:41:32.275550    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-482322" podStartSLOduration=1.275528517 podStartE2EDuration="1.275528517s" podCreationTimestamp="2025-10-02 07:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:41:32.249231787 +0000 UTC m=+1.246251863" watchObservedRunningTime="2025-10-02 07:41:32.275528517 +0000 UTC m=+1.272548592"
	Oct 02 07:41:32 scheduled-stop-482322 kubelet[1282]: I1002 07:41:32.298071    1282 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-scheduled-stop-482322"
	Oct 02 07:41:32 scheduled-stop-482322 kubelet[1282]: I1002 07:41:32.315784    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-482322" podStartSLOduration=1.315753888 podStartE2EDuration="1.315753888s" podCreationTimestamp="2025-10-02 07:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:41:32.276500052 +0000 UTC m=+1.273520136" watchObservedRunningTime="2025-10-02 07:41:32.315753888 +0000 UTC m=+1.312773972"
	Oct 02 07:41:32 scheduled-stop-482322 kubelet[1282]: E1002 07:41:32.323861    1282 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-scheduled-stop-482322\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-482322"
	Oct 02 07:41:32 scheduled-stop-482322 kubelet[1282]: I1002 07:41:32.376602    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-482322" podStartSLOduration=1.376530384 podStartE2EDuration="1.376530384s" podCreationTimestamp="2025-10-02 07:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:41:32.316185267 +0000 UTC m=+1.313205343" watchObservedRunningTime="2025-10-02 07:41:32.376530384 +0000 UTC m=+1.373550468"
	Oct 02 07:41:32 scheduled-stop-482322 kubelet[1282]: I1002 07:41:32.438928    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-482322" podStartSLOduration=1.43890961 podStartE2EDuration="1.43890961s" podCreationTimestamp="2025-10-02 07:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:41:32.376765734 +0000 UTC m=+1.373785892" watchObservedRunningTime="2025-10-02 07:41:32.43890961 +0000 UTC m=+1.435929694"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-482322 -n scheduled-stop-482322
helpers_test.go:269: (dbg) Run:  kubectl --context scheduled-stop-482322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: storage-provisioner
helpers_test.go:282: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context scheduled-stop-482322 describe pod storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context scheduled-stop-482322 describe pod storage-provisioner: exit status 1 (107.115926ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context scheduled-stop-482322 describe pod storage-provisioner: exit status 1
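The describe call above returns NotFound even though the pod listing a moment earlier reported storage-provisioner as a non-running pod, so the pod was evidently removed between the two kubectl invocations. A hypothetical one-pass variant of the same post-mortem query (only the --context value is taken from this run) that lists the non-running pods and describes each one immediately, narrowing that window:

  # Hypothetical one-pass variant of the post-mortem query; only the --context
  # value comes from this run. Describe each non-running pod right after listing
  # it instead of issuing a separate kubectl call later.
  kubectl --context scheduled-stop-482322 get po -A --field-selector=status.phase!=Running \
    -o jsonpath='{range .items[*]}{.metadata.namespace} {.metadata.name}{"\n"}{end}' |
  while read -r ns name; do
    kubectl --context scheduled-stop-482322 -n "$ns" describe pod "$name" || true
  done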
helpers_test.go:175: Cleaning up "scheduled-stop-482322" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-482322
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-482322: (1.935543303s)
--- FAIL: TestScheduledStopUnix (33.74s)

x
+
TestPause/serial/Pause (7.14s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-422707 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-422707 --alsologtostderr -v=5: exit status 80 (2.351015825s)

-- stdout --
	* Pausing node pause-422707 ... 
	
	

-- /stdout --
** stderr ** 
	I1002 07:47:34.808526  462056 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:47:34.809430  462056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:47:34.809487  462056 out.go:374] Setting ErrFile to fd 2...
	I1002 07:47:34.809508  462056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:47:34.809889  462056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:47:34.810308  462056 out.go:368] Setting JSON to false
	I1002 07:47:34.810363  462056 mustload.go:65] Loading cluster: pause-422707
	I1002 07:47:34.810841  462056 config.go:182] Loaded profile config "pause-422707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:47:34.811405  462056 cli_runner.go:164] Run: docker container inspect pause-422707 --format={{.State.Status}}
	I1002 07:47:34.833092  462056 host.go:66] Checking if "pause-422707" exists ...
	I1002 07:47:34.833406  462056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:47:34.894060  462056 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 07:47:34.883579761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:47:34.894740  462056 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-422707 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 07:47:34.899717  462056 out.go:179] * Pausing node pause-422707 ... 
	I1002 07:47:34.902639  462056 host.go:66] Checking if "pause-422707" exists ...
	I1002 07:47:34.902982  462056 ssh_runner.go:195] Run: systemctl --version
	I1002 07:47:34.903032  462056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:34.922245  462056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/pause-422707/id_rsa Username:docker}
	I1002 07:47:35.019502  462056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:47:35.033683  462056 pause.go:51] kubelet running: true
	I1002 07:47:35.033751  462056 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 07:47:35.283247  462056 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 07:47:35.283332  462056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 07:47:35.353779  462056 cri.go:89] found id: "bd2ad8230b36a900ce2e1a29b1b8034616f748c947febfbfde97a91c24efb068"
	I1002 07:47:35.353804  462056 cri.go:89] found id: "d120fcee17433144b61042570d7426dbbea18ad38caae066f3c488e1d546fa5f"
	I1002 07:47:35.353810  462056 cri.go:89] found id: "de61fc1c61af20cceeee6e8c3ff2c66f1d72b4eff29e7df072f688c447638dc5"
	I1002 07:47:35.353814  462056 cri.go:89] found id: "b8364eff63eb27502280c15e72f050b391d6c48bdc1e0b15e12b991cbe65b4e2"
	I1002 07:47:35.353817  462056 cri.go:89] found id: "7417b7c7f3bfda98962f017b5a0510c9c2693d339c94453d0849e7de2eb9d8d4"
	I1002 07:47:35.353821  462056 cri.go:89] found id: "f7bae3cd05925ab12ba039c66e40c1c68b06fd8f8c2effc0320d367c8336d488"
	I1002 07:47:35.353824  462056 cri.go:89] found id: "cdd11ede7258ff6809046b22ade252d706e70a12ce550aebbe4814c12e32f694"
	I1002 07:47:35.353827  462056 cri.go:89] found id: "7779786dbfb40f2436252d55263d5b88b48a937678c675a5ec383b2da42c5be2"
	I1002 07:47:35.353830  462056 cri.go:89] found id: "fff7fe0cc7b8b2200c8f3298384331b60916e87b46e04f1d6751ac804e1bd38e"
	I1002 07:47:35.353836  462056 cri.go:89] found id: "4c3b3cd93e322872b86d37772d4707046419be26c02a2e63639ac63fef43bb5b"
	I1002 07:47:35.353839  462056 cri.go:89] found id: "905cd7e5dfd7ea9891c435d909e83a9b93ede8e42ba50c4ca101e96e91b91bcd"
	I1002 07:47:35.353842  462056 cri.go:89] found id: "e1049b358ad259731384916f35ccf90b48b850267f7aed64a45d9db512a3a6d2"
	I1002 07:47:35.353845  462056 cri.go:89] found id: "36a0edc3f91c599e64798a3222fc111e434ab4a719442e7564de7ee2187ca26a"
	I1002 07:47:35.353849  462056 cri.go:89] found id: "bf6dbc138db362cfff432db0b771a54903e496cfb0cf5bd18097881ec91376c4"
	I1002 07:47:35.353853  462056 cri.go:89] found id: ""
	I1002 07:47:35.353908  462056 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 07:47:35.365190  462056 retry.go:31] will retry after 352.393707ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T07:47:35Z" level=error msg="open /run/runc: no such file or directory"
	I1002 07:47:35.717756  462056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:47:35.737052  462056 pause.go:51] kubelet running: false
	I1002 07:47:35.737119  462056 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 07:47:35.936820  462056 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 07:47:35.936926  462056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 07:47:36.003329  462056 cri.go:89] found id: "bd2ad8230b36a900ce2e1a29b1b8034616f748c947febfbfde97a91c24efb068"
	I1002 07:47:36.003362  462056 cri.go:89] found id: "d120fcee17433144b61042570d7426dbbea18ad38caae066f3c488e1d546fa5f"
	I1002 07:47:36.003367  462056 cri.go:89] found id: "de61fc1c61af20cceeee6e8c3ff2c66f1d72b4eff29e7df072f688c447638dc5"
	I1002 07:47:36.003371  462056 cri.go:89] found id: "b8364eff63eb27502280c15e72f050b391d6c48bdc1e0b15e12b991cbe65b4e2"
	I1002 07:47:36.003375  462056 cri.go:89] found id: "7417b7c7f3bfda98962f017b5a0510c9c2693d339c94453d0849e7de2eb9d8d4"
	I1002 07:47:36.003380  462056 cri.go:89] found id: "f7bae3cd05925ab12ba039c66e40c1c68b06fd8f8c2effc0320d367c8336d488"
	I1002 07:47:36.003383  462056 cri.go:89] found id: "cdd11ede7258ff6809046b22ade252d706e70a12ce550aebbe4814c12e32f694"
	I1002 07:47:36.003386  462056 cri.go:89] found id: "7779786dbfb40f2436252d55263d5b88b48a937678c675a5ec383b2da42c5be2"
	I1002 07:47:36.003390  462056 cri.go:89] found id: "fff7fe0cc7b8b2200c8f3298384331b60916e87b46e04f1d6751ac804e1bd38e"
	I1002 07:47:36.003411  462056 cri.go:89] found id: "4c3b3cd93e322872b86d37772d4707046419be26c02a2e63639ac63fef43bb5b"
	I1002 07:47:36.003414  462056 cri.go:89] found id: "905cd7e5dfd7ea9891c435d909e83a9b93ede8e42ba50c4ca101e96e91b91bcd"
	I1002 07:47:36.003418  462056 cri.go:89] found id: "e1049b358ad259731384916f35ccf90b48b850267f7aed64a45d9db512a3a6d2"
	I1002 07:47:36.003421  462056 cri.go:89] found id: "36a0edc3f91c599e64798a3222fc111e434ab4a719442e7564de7ee2187ca26a"
	I1002 07:47:36.003424  462056 cri.go:89] found id: "bf6dbc138db362cfff432db0b771a54903e496cfb0cf5bd18097881ec91376c4"
	I1002 07:47:36.003427  462056 cri.go:89] found id: ""
	I1002 07:47:36.003488  462056 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 07:47:36.017570  462056 retry.go:31] will retry after 243.355663ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T07:47:36Z" level=error msg="open /run/runc: no such file or directory"
	I1002 07:47:36.262107  462056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:47:36.275953  462056 pause.go:51] kubelet running: false
	I1002 07:47:36.276043  462056 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 07:47:36.413796  462056 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 07:47:36.413877  462056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 07:47:36.481603  462056 cri.go:89] found id: "bd2ad8230b36a900ce2e1a29b1b8034616f748c947febfbfde97a91c24efb068"
	I1002 07:47:36.481633  462056 cri.go:89] found id: "d120fcee17433144b61042570d7426dbbea18ad38caae066f3c488e1d546fa5f"
	I1002 07:47:36.481639  462056 cri.go:89] found id: "de61fc1c61af20cceeee6e8c3ff2c66f1d72b4eff29e7df072f688c447638dc5"
	I1002 07:47:36.481643  462056 cri.go:89] found id: "b8364eff63eb27502280c15e72f050b391d6c48bdc1e0b15e12b991cbe65b4e2"
	I1002 07:47:36.481646  462056 cri.go:89] found id: "7417b7c7f3bfda98962f017b5a0510c9c2693d339c94453d0849e7de2eb9d8d4"
	I1002 07:47:36.481650  462056 cri.go:89] found id: "f7bae3cd05925ab12ba039c66e40c1c68b06fd8f8c2effc0320d367c8336d488"
	I1002 07:47:36.481653  462056 cri.go:89] found id: "cdd11ede7258ff6809046b22ade252d706e70a12ce550aebbe4814c12e32f694"
	I1002 07:47:36.481671  462056 cri.go:89] found id: "7779786dbfb40f2436252d55263d5b88b48a937678c675a5ec383b2da42c5be2"
	I1002 07:47:36.481677  462056 cri.go:89] found id: "fff7fe0cc7b8b2200c8f3298384331b60916e87b46e04f1d6751ac804e1bd38e"
	I1002 07:47:36.481689  462056 cri.go:89] found id: "4c3b3cd93e322872b86d37772d4707046419be26c02a2e63639ac63fef43bb5b"
	I1002 07:47:36.481696  462056 cri.go:89] found id: "905cd7e5dfd7ea9891c435d909e83a9b93ede8e42ba50c4ca101e96e91b91bcd"
	I1002 07:47:36.481700  462056 cri.go:89] found id: "e1049b358ad259731384916f35ccf90b48b850267f7aed64a45d9db512a3a6d2"
	I1002 07:47:36.481703  462056 cri.go:89] found id: "36a0edc3f91c599e64798a3222fc111e434ab4a719442e7564de7ee2187ca26a"
	I1002 07:47:36.481708  462056 cri.go:89] found id: "bf6dbc138db362cfff432db0b771a54903e496cfb0cf5bd18097881ec91376c4"
	I1002 07:47:36.481711  462056 cri.go:89] found id: ""
	I1002 07:47:36.481761  462056 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 07:47:36.492690  462056 retry.go:31] will retry after 338.82077ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T07:47:36Z" level=error msg="open /run/runc: no such file or directory"
	I1002 07:47:36.832295  462056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:47:36.845631  462056 pause.go:51] kubelet running: false
	I1002 07:47:36.845699  462056 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 07:47:36.993581  462056 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 07:47:36.993663  462056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 07:47:37.072065  462056 cri.go:89] found id: "bd2ad8230b36a900ce2e1a29b1b8034616f748c947febfbfde97a91c24efb068"
	I1002 07:47:37.072092  462056 cri.go:89] found id: "d120fcee17433144b61042570d7426dbbea18ad38caae066f3c488e1d546fa5f"
	I1002 07:47:37.072099  462056 cri.go:89] found id: "de61fc1c61af20cceeee6e8c3ff2c66f1d72b4eff29e7df072f688c447638dc5"
	I1002 07:47:37.072103  462056 cri.go:89] found id: "b8364eff63eb27502280c15e72f050b391d6c48bdc1e0b15e12b991cbe65b4e2"
	I1002 07:47:37.072106  462056 cri.go:89] found id: "7417b7c7f3bfda98962f017b5a0510c9c2693d339c94453d0849e7de2eb9d8d4"
	I1002 07:47:37.072110  462056 cri.go:89] found id: "f7bae3cd05925ab12ba039c66e40c1c68b06fd8f8c2effc0320d367c8336d488"
	I1002 07:47:37.072113  462056 cri.go:89] found id: "cdd11ede7258ff6809046b22ade252d706e70a12ce550aebbe4814c12e32f694"
	I1002 07:47:37.072116  462056 cri.go:89] found id: "7779786dbfb40f2436252d55263d5b88b48a937678c675a5ec383b2da42c5be2"
	I1002 07:47:37.072120  462056 cri.go:89] found id: "fff7fe0cc7b8b2200c8f3298384331b60916e87b46e04f1d6751ac804e1bd38e"
	I1002 07:47:37.072126  462056 cri.go:89] found id: "4c3b3cd93e322872b86d37772d4707046419be26c02a2e63639ac63fef43bb5b"
	I1002 07:47:37.072129  462056 cri.go:89] found id: "905cd7e5dfd7ea9891c435d909e83a9b93ede8e42ba50c4ca101e96e91b91bcd"
	I1002 07:47:37.072133  462056 cri.go:89] found id: "e1049b358ad259731384916f35ccf90b48b850267f7aed64a45d9db512a3a6d2"
	I1002 07:47:37.072136  462056 cri.go:89] found id: "36a0edc3f91c599e64798a3222fc111e434ab4a719442e7564de7ee2187ca26a"
	I1002 07:47:37.072140  462056 cri.go:89] found id: "bf6dbc138db362cfff432db0b771a54903e496cfb0cf5bd18097881ec91376c4"
	I1002 07:47:37.072144  462056 cri.go:89] found id: ""
	I1002 07:47:37.072207  462056 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 07:47:37.087622  462056 out.go:203] 
	W1002 07:47:37.090776  462056 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T07:47:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T07:47:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 07:47:37.090803  462056 out.go:285] * 
	* 
	W1002 07:47:37.096670  462056 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 07:47:37.101836  462056 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-422707 --alsologtostderr -v=5" : exit status 80
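Each pause attempt in the stderr above fails at the same step: sudo runc list -f json exits with status 1 because /run/runc does not exist on the node, so minikube never obtains a container list and gives up with GUEST_PAUSE after retrying (352ms, 243ms and 338ms back-offs). A minimal sketch, assuming the pause-422707 profile is still up and using the same binary as this run, of commands one might run by hand to confirm the missing runc state directory and compare it with what CRI-O itself reports:

  # Assumptions: the pause-422707 profile still exists and out/minikube-linux-arm64
  # is the binary used in this report. These commands only read state on the node.
  out/minikube-linux-arm64 ssh -p pause-422707 -- sudo ls -ld /run/runc   # the path runc failed to open
  out/minikube-linux-arm64 ssh -p pause-422707 -- sudo runc list -f json  # the exact call the pause code retried
  out/minikube-linux-arm64 ssh -p pause-422707 -- sudo crictl ps          # running containers as seen by CRI-O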
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-422707
helpers_test.go:243: (dbg) docker inspect pause-422707:

-- stdout --
	[
	    {
	        "Id": "7d708e6feb9fe71a3bfff6208e6e1660afce026103466341af50357737db414b",
	        "Created": "2025-10-02T07:45:47.046314868Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 455826,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:45:47.114037347Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/7d708e6feb9fe71a3bfff6208e6e1660afce026103466341af50357737db414b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d708e6feb9fe71a3bfff6208e6e1660afce026103466341af50357737db414b/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d708e6feb9fe71a3bfff6208e6e1660afce026103466341af50357737db414b/hosts",
	        "LogPath": "/var/lib/docker/containers/7d708e6feb9fe71a3bfff6208e6e1660afce026103466341af50357737db414b/7d708e6feb9fe71a3bfff6208e6e1660afce026103466341af50357737db414b-json.log",
	        "Name": "/pause-422707",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-422707:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-422707",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d708e6feb9fe71a3bfff6208e6e1660afce026103466341af50357737db414b",
	                "LowerDir": "/var/lib/docker/overlay2/d2ac33d6bea0c6956c76633f936e852aadd17a3f2d6afe8077f7e0a8db132299-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2ac33d6bea0c6956c76633f936e852aadd17a3f2d6afe8077f7e0a8db132299/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2ac33d6bea0c6956c76633f936e852aadd17a3f2d6afe8077f7e0a8db132299/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2ac33d6bea0c6956c76633f936e852aadd17a3f2d6afe8077f7e0a8db132299/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-422707",
	                "Source": "/var/lib/docker/volumes/pause-422707/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-422707",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-422707",
	                "name.minikube.sigs.k8s.io": "pause-422707",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "07a0969bc33fbba3fcd568d6e6238030debac6332c75c4058fba2cdea25bd6a2",
	            "SandboxKey": "/var/run/docker/netns/07a0969bc33f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33373"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33374"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33377"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33375"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33376"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-422707": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:45:ed:24:da:c3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f690f11d2824d5c1d0d4b881867c9a0fa545f04fd81cf4a885ec314b2e8f033c",
	                    "EndpointID": "dcc04cedfcf239918562bd931d0a3a7c1c038a2ec793362f0481bd59a36e26c1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-422707",
	                        "7d708e6feb9f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-422707 -n pause-422707
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-422707 -n pause-422707: exit status 2 (327.077017ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-422707 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-422707 logs -n 25: (1.471537611s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-050176 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:41 UTC │ 02 Oct 25 07:42 UTC │
	│ start   │ -p missing-upgrade-857609 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-857609    │ jenkins │ v1.32.0 │ 02 Oct 25 07:41 UTC │ 02 Oct 25 07:42 UTC │
	│ start   │ -p NoKubernetes-050176 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:42 UTC │ 02 Oct 25 07:43 UTC │
	│ start   │ -p missing-upgrade-857609 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-857609    │ jenkins │ v1.37.0 │ 02 Oct 25 07:42 UTC │ 02 Oct 25 07:43 UTC │
	│ delete  │ -p NoKubernetes-050176                                                                                                                   │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:43 UTC │
	│ start   │ -p NoKubernetes-050176 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:43 UTC │
	│ ssh     │ -p NoKubernetes-050176 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │                     │
	│ stop    │ -p NoKubernetes-050176                                                                                                                   │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:43 UTC │
	│ start   │ -p NoKubernetes-050176 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:43 UTC │
	│ ssh     │ -p NoKubernetes-050176 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │                     │
	│ delete  │ -p NoKubernetes-050176                                                                                                                   │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:43 UTC │
	│ start   │ -p kubernetes-upgrade-011391 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-011391 │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:44 UTC │
	│ delete  │ -p missing-upgrade-857609                                                                                                                │ missing-upgrade-857609    │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:43 UTC │
	│ start   │ -p stopped-upgrade-151473 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-151473    │ jenkins │ v1.32.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:44 UTC │
	│ stop    │ -p kubernetes-upgrade-011391                                                                                                             │ kubernetes-upgrade-011391 │ jenkins │ v1.37.0 │ 02 Oct 25 07:44 UTC │ 02 Oct 25 07:44 UTC │
	│ start   │ -p kubernetes-upgrade-011391 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-011391 │ jenkins │ v1.37.0 │ 02 Oct 25 07:44 UTC │                     │
	│ stop    │ stopped-upgrade-151473 stop                                                                                                              │ stopped-upgrade-151473    │ jenkins │ v1.32.0 │ 02 Oct 25 07:44 UTC │ 02 Oct 25 07:44 UTC │
	│ start   │ -p stopped-upgrade-151473 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-151473    │ jenkins │ v1.37.0 │ 02 Oct 25 07:44 UTC │ 02 Oct 25 07:44 UTC │
	│ delete  │ -p stopped-upgrade-151473                                                                                                                │ stopped-upgrade-151473    │ jenkins │ v1.37.0 │ 02 Oct 25 07:44 UTC │ 02 Oct 25 07:44 UTC │
	│ start   │ -p running-upgrade-838161 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-838161    │ jenkins │ v1.32.0 │ 02 Oct 25 07:44 UTC │ 02 Oct 25 07:45 UTC │
	│ start   │ -p running-upgrade-838161 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-838161    │ jenkins │ v1.37.0 │ 02 Oct 25 07:45 UTC │ 02 Oct 25 07:45 UTC │
	│ delete  │ -p running-upgrade-838161                                                                                                                │ running-upgrade-838161    │ jenkins │ v1.37.0 │ 02 Oct 25 07:45 UTC │ 02 Oct 25 07:45 UTC │
	│ start   │ -p pause-422707 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-422707              │ jenkins │ v1.37.0 │ 02 Oct 25 07:45 UTC │ 02 Oct 25 07:47 UTC │
	│ start   │ -p pause-422707 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-422707              │ jenkins │ v1.37.0 │ 02 Oct 25 07:47 UTC │ 02 Oct 25 07:47 UTC │
	│ pause   │ -p pause-422707 --alsologtostderr -v=5                                                                                                   │ pause-422707              │ jenkins │ v1.37.0 │ 02 Oct 25 07:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:47:04
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:47:04.374258  460071 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:47:04.374503  460071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:47:04.374534  460071 out.go:374] Setting ErrFile to fd 2...
	I1002 07:47:04.374555  460071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:47:04.374911  460071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:47:04.375398  460071 out.go:368] Setting JSON to false
	I1002 07:47:04.376519  460071 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8976,"bootTime":1759382249,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:47:04.376624  460071 start.go:140] virtualization:  
	I1002 07:47:04.380342  460071 out.go:179] * [pause-422707] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:47:04.383534  460071 notify.go:220] Checking for updates...
	I1002 07:47:04.386418  460071 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:47:04.389253  460071 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:47:04.393182  460071 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:47:04.396141  460071 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:47:04.399436  460071 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:47:04.402264  460071 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:47:04.405622  460071 config.go:182] Loaded profile config "pause-422707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:47:04.406193  460071 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:47:04.437397  460071 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:47:04.437576  460071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:47:04.543012  460071 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 07:47:04.532775282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:47:04.543200  460071 docker.go:318] overlay module found
	I1002 07:47:04.548265  460071 out.go:179] * Using the docker driver based on existing profile
	I1002 07:47:04.551190  460071 start.go:304] selected driver: docker
	I1002 07:47:04.551213  460071 start.go:924] validating driver "docker" against &{Name:pause-422707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-422707 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:47:04.551352  460071 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:47:04.551464  460071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:47:04.654462  460071 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 07:47:04.6417758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:
/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:47:04.654925  460071 cni.go:84] Creating CNI manager for ""
	I1002 07:47:04.655001  460071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:47:04.655055  460071 start.go:348] cluster config:
	{Name:pause-422707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-422707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:47:04.660061  460071 out.go:179] * Starting "pause-422707" primary control-plane node in "pause-422707" cluster
	I1002 07:47:04.662890  460071 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:47:04.665829  460071 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:47:04.668679  460071 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:47:04.668746  460071 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:47:04.668746  460071 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:47:04.668757  460071 cache.go:58] Caching tarball of preloaded images
	I1002 07:47:04.668858  460071 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:47:04.668867  460071 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:47:04.669007  460071 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/config.json ...
	I1002 07:47:04.691993  460071 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:47:04.692028  460071 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:47:04.692050  460071 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:47:04.692078  460071 start.go:360] acquireMachinesLock for pause-422707: {Name:mk8e831218cb50db533345363d2b05f8b5cf7cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:47:04.692148  460071 start.go:364] duration metric: took 43.348µs to acquireMachinesLock for "pause-422707"
	I1002 07:47:04.692170  460071 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:47:04.692187  460071 fix.go:54] fixHost starting: 
	I1002 07:47:04.692672  460071 cli_runner.go:164] Run: docker container inspect pause-422707 --format={{.State.Status}}
	I1002 07:47:04.719904  460071 fix.go:112] recreateIfNeeded on pause-422707: state=Running err=<nil>
	W1002 07:47:04.719933  460071 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:47:01.296466  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:01.306856  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:01.306928  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:01.335146  447344 cri.go:89] found id: ""
	I1002 07:47:01.335172  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.335181  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:01.335188  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:01.335251  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:01.377942  447344 cri.go:89] found id: ""
	I1002 07:47:01.377964  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.377973  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:01.377979  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:01.378036  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:01.410148  447344 cri.go:89] found id: ""
	I1002 07:47:01.410174  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.410184  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:01.410193  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:01.410298  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:01.449591  447344 cri.go:89] found id: ""
	I1002 07:47:01.449618  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.449628  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:01.449634  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:01.449702  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:01.477647  447344 cri.go:89] found id: ""
	I1002 07:47:01.477673  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.477690  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:01.477697  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:01.477763  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:01.513349  447344 cri.go:89] found id: ""
	I1002 07:47:01.513373  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.513391  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:01.513398  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:01.513454  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:01.540332  447344 cri.go:89] found id: ""
	I1002 07:47:01.540357  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.540367  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:01.540373  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:01.540435  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:01.570253  447344 cri.go:89] found id: ""
	I1002 07:47:01.570279  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.570289  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:01.570302  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:01.570313  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:01.689548  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:01.689588  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:01.707435  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:01.707469  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:01.782322  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:01.782345  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:01.782358  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:01.820211  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:01.820252  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:04.352012  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:04.364147  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:04.364223  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:04.398407  447344 cri.go:89] found id: ""
	I1002 07:47:04.398427  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.398435  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:04.398442  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:04.398503  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:04.439369  447344 cri.go:89] found id: ""
	I1002 07:47:04.439395  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.439404  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:04.439410  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:04.439472  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:04.482068  447344 cri.go:89] found id: ""
	I1002 07:47:04.482089  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.482098  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:04.482104  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:04.482173  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:04.518232  447344 cri.go:89] found id: ""
	I1002 07:47:04.518260  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.518270  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:04.518277  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:04.518335  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:04.555468  447344 cri.go:89] found id: ""
	I1002 07:47:04.555490  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.555499  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:04.555506  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:04.555566  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:04.590180  447344 cri.go:89] found id: ""
	I1002 07:47:04.590203  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.590212  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:04.590219  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:04.590282  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:04.636300  447344 cri.go:89] found id: ""
	I1002 07:47:04.636321  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.636330  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:04.636336  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:04.636399  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:04.672396  447344 cri.go:89] found id: ""
	I1002 07:47:04.672421  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.672430  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:04.672440  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:04.672452  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:04.817232  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:04.817303  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:04.841414  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:04.841493  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:04.926654  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:04.926680  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:04.926692  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:04.972059  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:04.972135  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:04.723047  460071 out.go:252] * Updating the running docker "pause-422707" container ...
	I1002 07:47:04.723318  460071 machine.go:93] provisionDockerMachine start ...
	I1002 07:47:04.723433  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:04.757828  460071 main.go:141] libmachine: Using SSH client type: native
	I1002 07:47:04.758269  460071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1002 07:47:04.758292  460071 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:47:04.899052  460071 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-422707
	
	I1002 07:47:04.899145  460071 ubuntu.go:182] provisioning hostname "pause-422707"
	I1002 07:47:04.899263  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:04.926325  460071 main.go:141] libmachine: Using SSH client type: native
	I1002 07:47:04.927273  460071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1002 07:47:04.927292  460071 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-422707 && echo "pause-422707" | sudo tee /etc/hostname
	I1002 07:47:05.089889  460071 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-422707
	
	I1002 07:47:05.089981  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:05.111271  460071 main.go:141] libmachine: Using SSH client type: native
	I1002 07:47:05.111600  460071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1002 07:47:05.111623  460071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-422707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-422707/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-422707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:47:05.248151  460071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:47:05.248189  460071 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:47:05.248210  460071 ubuntu.go:190] setting up certificates
	I1002 07:47:05.248220  460071 provision.go:84] configureAuth start
	I1002 07:47:05.248292  460071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-422707
	I1002 07:47:05.271931  460071 provision.go:143] copyHostCerts
	I1002 07:47:05.272000  460071 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:47:05.272022  460071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:47:05.272212  460071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:47:05.272331  460071 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:47:05.272344  460071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:47:05.272375  460071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:47:05.272444  460071 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:47:05.272454  460071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:47:05.272480  460071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:47:05.272539  460071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.pause-422707 san=[127.0.0.1 192.168.85.2 localhost minikube pause-422707]
	I1002 07:47:05.426935  460071 provision.go:177] copyRemoteCerts
	I1002 07:47:05.427005  460071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:47:05.427045  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:05.445580  460071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/pause-422707/id_rsa Username:docker}
	I1002 07:47:05.543282  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:47:05.561945  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1002 07:47:05.580202  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:47:05.598609  460071 provision.go:87] duration metric: took 350.360326ms to configureAuth
	I1002 07:47:05.598635  460071 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:47:05.598885  460071 config.go:182] Loaded profile config "pause-422707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:47:05.599013  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:05.623398  460071 main.go:141] libmachine: Using SSH client type: native
	I1002 07:47:05.623721  460071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1002 07:47:05.623742  460071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:47:07.515692  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:07.526342  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:07.526416  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:07.551984  447344 cri.go:89] found id: ""
	I1002 07:47:07.552010  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.552021  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:07.552028  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:07.552085  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:07.576730  447344 cri.go:89] found id: ""
	I1002 07:47:07.576753  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.576763  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:07.576769  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:07.576833  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:07.602747  447344 cri.go:89] found id: ""
	I1002 07:47:07.602772  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.602788  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:07.602794  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:07.602855  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:07.630039  447344 cri.go:89] found id: ""
	I1002 07:47:07.630065  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.630075  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:07.630082  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:07.630147  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:07.656492  447344 cri.go:89] found id: ""
	I1002 07:47:07.656518  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.656528  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:07.656535  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:07.656595  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:07.682455  447344 cri.go:89] found id: ""
	I1002 07:47:07.682483  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.682493  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:07.682500  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:07.682561  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:07.710747  447344 cri.go:89] found id: ""
	I1002 07:47:07.710772  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.710790  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:07.710797  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:07.710856  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:07.736099  447344 cri.go:89] found id: ""
	I1002 07:47:07.736126  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.736135  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:07.736145  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:07.736157  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:07.847283  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:07.847321  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:07.863644  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:07.863731  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:07.935998  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:07.936019  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:07.936032  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:07.976790  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:07.976840  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:11.024908  460071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:47:11.024931  460071 machine.go:96] duration metric: took 6.301600949s to provisionDockerMachine
	I1002 07:47:11.024943  460071 start.go:293] postStartSetup for "pause-422707" (driver="docker")
	I1002 07:47:11.024954  460071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:47:11.025024  460071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:47:11.025063  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:11.048006  460071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/pause-422707/id_rsa Username:docker}
	I1002 07:47:11.148666  460071 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:47:11.152523  460071 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:47:11.152554  460071 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:47:11.152566  460071 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:47:11.152625  460071 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:47:11.152713  460071 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:47:11.152822  460071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:47:11.161342  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:47:11.180692  460071 start.go:296] duration metric: took 155.732643ms for postStartSetup
	I1002 07:47:11.180792  460071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:47:11.180840  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:11.198919  460071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/pause-422707/id_rsa Username:docker}
	I1002 07:47:11.292406  460071 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:47:11.297725  460071 fix.go:56] duration metric: took 6.605535897s for fixHost
	I1002 07:47:11.297753  460071 start.go:83] releasing machines lock for "pause-422707", held for 6.605593982s
	I1002 07:47:11.297824  460071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-422707
	I1002 07:47:11.314915  460071 ssh_runner.go:195] Run: cat /version.json
	I1002 07:47:11.314934  460071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:47:11.314967  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:11.314997  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:11.336948  460071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/pause-422707/id_rsa Username:docker}
	I1002 07:47:11.339707  460071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/pause-422707/id_rsa Username:docker}
	I1002 07:47:11.515965  460071 ssh_runner.go:195] Run: systemctl --version
	I1002 07:47:11.522457  460071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:47:11.572359  460071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:47:11.576896  460071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:47:11.576976  460071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:47:11.584761  460071 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:47:11.584783  460071 start.go:495] detecting cgroup driver to use...
	I1002 07:47:11.584815  460071 detect.go:187] detected "cgroupfs" cgroup driver on host os
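The "cgroupfs" result here matches the CgroupDriver field in the docker info dump at the top of this section. A quick way to inspect the same thing by hand, shown only as an illustration (minikube reads it from docker info rather than running these commands):

	# Docker's view of the host cgroup driver, as recorded in the log above:
	docker info --format '{{.CgroupDriver}}'
	# Kernel-level check: "cgroup2fs" means the unified v2 hierarchy, "tmpfs" means v1.
	stat -fc %T /sys/fs/cgroup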
	I1002 07:47:11.584861  460071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:47:11.600150  460071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:47:11.614242  460071 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:47:11.614334  460071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:47:11.630623  460071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:47:11.644410  460071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:47:11.786533  460071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:47:11.945574  460071 docker.go:234] disabling docker service ...
	I1002 07:47:11.945639  460071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:47:11.961726  460071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:47:11.976646  460071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:47:12.116768  460071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:47:12.256912  460071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:47:12.270192  460071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:47:12.285234  460071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:47:12.285309  460071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:12.295062  460071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:47:12.295190  460071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:12.305321  460071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:12.314210  460071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:12.323217  460071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:47:12.332614  460071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:12.341307  460071 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:12.350016  460071 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:12.359060  460071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:47:12.366908  460071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:47:12.374653  460071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:47:12.510486  460071 ssh_runner.go:195] Run: sudo systemctl restart crio
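Taken together, the sed edits above should leave the fields sketched in the comments below in /etc/crio/crio.conf.d/02-crio.conf; the expected values are reconstructed from the commands in the log, not captured from the node:

	# Show the fields the preceding sed commands configured:
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, reconstructed from the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]
	# After the restart, the socket the next log lines wait for should answer:
	sudo crictl info >/dev/null && echo "crio is answering on /var/run/crio/crio.sock"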
	I1002 07:47:12.686286  460071 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:47:12.686379  460071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:47:12.690175  460071 start.go:563] Will wait 60s for crictl version
	I1002 07:47:12.690286  460071 ssh_runner.go:195] Run: which crictl
	I1002 07:47:12.693986  460071 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:47:12.719439  460071 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:47:12.719581  460071 ssh_runner.go:195] Run: crio --version
	I1002 07:47:12.748406  460071 ssh_runner.go:195] Run: crio --version
	I1002 07:47:12.781747  460071 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:47:12.784833  460071 cli_runner.go:164] Run: docker network inspect pause-422707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:47:12.800384  460071 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 07:47:12.804384  460071 kubeadm.go:883] updating cluster {Name:pause-422707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-422707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:47:12.804535  460071 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:47:12.804594  460071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:47:12.838049  460071 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:47:12.838078  460071 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:47:12.838135  460071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:47:12.868696  460071 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:47:12.868721  460071 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:47:12.868729  460071 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 07:47:12.868849  460071 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-422707 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-422707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:47:12.868937  460071 ssh_runner.go:195] Run: crio config
	I1002 07:47:12.934263  460071 cni.go:84] Creating CNI manager for ""
	I1002 07:47:12.934297  460071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:47:12.934316  460071 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:47:12.934370  460071 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-422707 NodeName:pause-422707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:47:12.934546  460071 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-422707"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:47:12.934635  460071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:47:12.943448  460071 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:47:12.943543  460071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:47:12.951140  460071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1002 07:47:12.964030  460071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:47:12.978548  460071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
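The kubeadm configuration rendered above is the 2209-byte payload written here to /var/tmp/minikube/kubeadm.yaml.new. minikube drives kubeadm itself during the restart, but the same file can be checked by hand; a minimal sketch, assuming the bundled kubeadm is recent enough to ship the `config validate` subcommand:

	# Parse the generated file against the API versions it declares
	# (kubeadm.k8s.io/v1beta4, kubelet.config.k8s.io/v1beta1, kubeproxy.config.k8s.io/v1alpha1):
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new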
	I1002 07:47:12.991826  460071 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:47:12.995804  460071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:47:13.129338  460071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:47:13.144304  460071 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707 for IP: 192.168.85.2
	I1002 07:47:13.144325  460071 certs.go:195] generating shared ca certs ...
	I1002 07:47:13.144341  460071 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:47:13.144474  460071 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:47:13.144521  460071 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:47:13.144532  460071 certs.go:257] generating profile certs ...
	I1002 07:47:13.144632  460071 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/client.key
	I1002 07:47:13.144700  460071 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/apiserver.key.b8eed788
	I1002 07:47:13.144752  460071 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/proxy-client.key
	I1002 07:47:13.144859  460071 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:47:13.144889  460071 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:47:13.144904  460071 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:47:13.144930  460071 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:47:13.144960  460071 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:47:13.144984  460071 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:47:13.145031  460071 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:47:13.145631  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:47:13.165041  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:47:13.183626  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:47:13.201522  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:47:13.220053  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 07:47:13.237598  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:47:13.254061  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:47:13.270958  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:47:13.288223  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:47:13.308498  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:47:13.325984  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:47:13.345088  460071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:47:13.358327  460071 ssh_runner.go:195] Run: openssl version
	I1002 07:47:13.364613  460071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:47:13.372763  460071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:47:13.376940  460071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:47:13.377013  460071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:47:13.420567  460071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:47:13.428980  460071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:47:13.437900  460071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:47:13.442024  460071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:47:13.442096  460071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:47:13.484152  460071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:47:13.492860  460071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:47:13.501456  460071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:47:13.505665  460071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:47:13.505741  460071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:47:13.547327  460071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
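The 51391683.0, 3ec20f2e.0 and b5213941.0 names being linked in the three blocks above are OpenSSL subject-hash lookup names: the value printed by `openssl x509 -hash -noout` for each certificate, plus a ".0" suffix. The same pattern for a single certificate, with a placeholder file name:

	# Compute the subject hash and create the lookup symlink OpenSSL expects:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
	sudo ln -fs /usr/share/ca-certificates/example-ca.pem "/etc/ssl/certs/${h}.0"
	# `openssl rehash /etc/ssl/certs` (or c_rehash) does this for every cert in the directory.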
	I1002 07:47:13.555747  460071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:47:13.560508  460071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:47:13.604165  460071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:47:13.649056  460071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:47:13.691734  460071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:47:13.735455  460071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:47:13.779364  460071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
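Each `-checkend 86400` run above asks whether the certificate will still be valid 24 hours from now; openssl exits 0 when the certificate does not expire within that window. A standalone sketch using one of the files copied earlier in this run:

	# Exit 0: valid for at least another 86400 seconds; non-zero: expiring or expired.
	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
	  echo "apiserver certificate is good for at least another day"
	else
	  echo "apiserver certificate expires within 24 hours (or already has)"
	fi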
	I1002 07:47:13.823035  460071 kubeadm.go:400] StartCluster: {Name:pause-422707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-422707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:47:13.823252  460071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:47:13.823357  460071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:47:13.866864  460071 cri.go:89] found id: "7779786dbfb40f2436252d55263d5b88b48a937678c675a5ec383b2da42c5be2"
	I1002 07:47:13.866938  460071 cri.go:89] found id: "fff7fe0cc7b8b2200c8f3298384331b60916e87b46e04f1d6751ac804e1bd38e"
	I1002 07:47:13.866959  460071 cri.go:89] found id: "4c3b3cd93e322872b86d37772d4707046419be26c02a2e63639ac63fef43bb5b"
	I1002 07:47:13.866979  460071 cri.go:89] found id: "905cd7e5dfd7ea9891c435d909e83a9b93ede8e42ba50c4ca101e96e91b91bcd"
	I1002 07:47:13.867017  460071 cri.go:89] found id: "e1049b358ad259731384916f35ccf90b48b850267f7aed64a45d9db512a3a6d2"
	I1002 07:47:13.867041  460071 cri.go:89] found id: "36a0edc3f91c599e64798a3222fc111e434ab4a719442e7564de7ee2187ca26a"
	I1002 07:47:13.867063  460071 cri.go:89] found id: "bf6dbc138db362cfff432db0b771a54903e496cfb0cf5bd18097881ec91376c4"
	I1002 07:47:13.867183  460071 cri.go:89] found id: ""
	I1002 07:47:13.867280  460071 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 07:47:13.880015  460071 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T07:47:13Z" level=error msg="open /run/runc: no such file or directory"
	I1002 07:47:13.880169  460071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:47:13.892241  460071 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:47:13.892309  460071 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:47:13.892401  460071 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:47:13.902318  460071 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:47:13.903225  460071 kubeconfig.go:125] found "pause-422707" server: "https://192.168.85.2:8443"
	I1002 07:47:13.904575  460071 kapi.go:59] client config for pause-422707: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:47:13.905983  460071 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:47:13.906042  460071 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:47:13.906066  460071 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:47:13.906250  460071 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:47:13.906297  460071 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:47:13.906994  460071 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:47:13.921592  460071 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 07:47:13.921627  460071 kubeadm.go:601] duration metric: took 29.297891ms to restartPrimaryControlPlane
	I1002 07:47:13.921636  460071 kubeadm.go:402] duration metric: took 98.612057ms to StartCluster
	I1002 07:47:13.921652  460071 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:47:13.921730  460071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:47:13.922575  460071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:47:13.922835  460071 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:47:13.923052  460071 config.go:182] Loaded profile config "pause-422707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:47:13.923106  460071 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:47:13.927663  460071 out.go:179] * Enabled addons: 
	I1002 07:47:13.927775  460071 out.go:179] * Verifying Kubernetes components...
	I1002 07:47:13.931438  460071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:47:13.931894  460071 addons.go:514] duration metric: took 8.777412ms for enable addons: enabled=[]
	I1002 07:47:14.112820  460071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:47:14.129394  460071 node_ready.go:35] waiting up to 6m0s for node "pause-422707" to be "Ready" ...
	I1002 07:47:10.509542  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:10.519913  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:10.519986  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:10.548547  447344 cri.go:89] found id: ""
	I1002 07:47:10.548576  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.548586  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:10.548593  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:10.548652  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:10.573108  447344 cri.go:89] found id: ""
	I1002 07:47:10.573136  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.573146  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:10.573152  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:10.573210  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:10.598730  447344 cri.go:89] found id: ""
	I1002 07:47:10.598753  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.598762  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:10.598768  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:10.598840  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:10.625353  447344 cri.go:89] found id: ""
	I1002 07:47:10.625380  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.625390  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:10.625396  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:10.625501  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:10.651756  447344 cri.go:89] found id: ""
	I1002 07:47:10.651781  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.651791  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:10.651798  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:10.651856  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:10.678025  447344 cri.go:89] found id: ""
	I1002 07:47:10.678056  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.678065  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:10.678072  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:10.678140  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:10.705869  447344 cri.go:89] found id: ""
	I1002 07:47:10.705895  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.705904  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:10.705910  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:10.705983  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:10.731672  447344 cri.go:89] found id: ""
	I1002 07:47:10.731694  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.731703  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:10.731727  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:10.731745  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:10.800873  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:10.800899  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:10.800916  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:10.841843  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:10.841926  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:10.879486  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:10.879566  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:11.007003  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:11.007131  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:13.536958  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:13.548915  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:13.548987  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:13.593431  447344 cri.go:89] found id: ""
	I1002 07:47:13.593459  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.593468  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:13.593475  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:13.593535  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:13.630994  447344 cri.go:89] found id: ""
	I1002 07:47:13.631021  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.631030  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:13.631037  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:13.631140  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:13.669485  447344 cri.go:89] found id: ""
	I1002 07:47:13.669514  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.669523  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:13.669530  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:13.669589  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:13.697680  447344 cri.go:89] found id: ""
	I1002 07:47:13.697721  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.697730  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:13.697737  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:13.697804  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:13.735944  447344 cri.go:89] found id: ""
	I1002 07:47:13.735970  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.735980  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:13.735987  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:13.736043  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:13.769782  447344 cri.go:89] found id: ""
	I1002 07:47:13.769811  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.769820  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:13.769827  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:13.769888  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:13.809661  447344 cri.go:89] found id: ""
	I1002 07:47:13.809690  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.809699  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:13.809706  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:13.809766  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:13.840213  447344 cri.go:89] found id: ""
	I1002 07:47:13.840242  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.840250  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:13.840259  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:13.840274  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:13.876171  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:13.876203  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:14.019615  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:14.019722  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:14.041302  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:14.041340  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:14.122048  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:14.122070  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:14.122083  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:16.661906  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:16.680850  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:16.680920  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:16.733530  447344 cri.go:89] found id: ""
	I1002 07:47:16.733553  447344 logs.go:282] 0 containers: []
	W1002 07:47:16.733561  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:16.733568  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:16.733625  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:16.780177  447344 cri.go:89] found id: ""
	I1002 07:47:16.780201  447344 logs.go:282] 0 containers: []
	W1002 07:47:16.780211  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:16.780217  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:16.780275  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:16.821276  447344 cri.go:89] found id: ""
	I1002 07:47:16.821303  447344 logs.go:282] 0 containers: []
	W1002 07:47:16.821313  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:16.821319  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:16.821378  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:16.882463  447344 cri.go:89] found id: ""
	I1002 07:47:16.882488  447344 logs.go:282] 0 containers: []
	W1002 07:47:16.882497  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:16.882507  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:16.882564  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:16.928483  447344 cri.go:89] found id: ""
	I1002 07:47:16.928510  447344 logs.go:282] 0 containers: []
	W1002 07:47:16.928520  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:16.928526  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:16.928584  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:16.978194  447344 cri.go:89] found id: ""
	I1002 07:47:16.978221  447344 logs.go:282] 0 containers: []
	W1002 07:47:16.978231  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:16.978237  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:16.978303  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:17.020494  447344 cri.go:89] found id: ""
	I1002 07:47:17.020521  447344 logs.go:282] 0 containers: []
	W1002 07:47:17.020531  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:17.020538  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:17.020595  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:17.074138  447344 cri.go:89] found id: ""
	I1002 07:47:17.074163  447344 logs.go:282] 0 containers: []
	W1002 07:47:17.074172  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:17.074181  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:17.074192  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:17.118906  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:17.118944  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:17.159763  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:17.159787  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:17.304020  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:17.304100  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:17.340815  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:17.340895  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:17.436310  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:19.936552  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:19.953508  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:19.953580  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:20.009173  447344 cri.go:89] found id: ""
	I1002 07:47:20.009198  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.009207  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:20.009216  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:20.009284  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:20.061337  447344 cri.go:89] found id: ""
	I1002 07:47:20.061435  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.061464  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:20.061472  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:20.061545  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:20.128576  447344 cri.go:89] found id: ""
	I1002 07:47:20.128653  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.128678  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:20.128704  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:20.128804  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:20.181519  447344 cri.go:89] found id: ""
	I1002 07:47:20.181541  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.181549  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:20.181556  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:20.181621  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:20.232491  447344 cri.go:89] found id: ""
	I1002 07:47:20.232517  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.232526  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:20.232532  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:20.232596  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:20.281996  447344 cri.go:89] found id: ""
	I1002 07:47:20.282022  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.282032  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:20.282039  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:20.282101  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:20.331530  447344 cri.go:89] found id: ""
	I1002 07:47:20.331555  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.331564  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:20.331570  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:20.331629  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:20.868981  460071 node_ready.go:49] node "pause-422707" is "Ready"
	I1002 07:47:20.869009  460071 node_ready.go:38] duration metric: took 6.739578461s for node "pause-422707" to be "Ready" ...
	I1002 07:47:20.869023  460071 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:47:20.869086  460071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:20.888531  460071 api_server.go:72] duration metric: took 6.965661497s to wait for apiserver process to appear ...
	I1002 07:47:20.888552  460071 api_server.go:88] waiting for apiserver healthz status ...
	I1002 07:47:20.888571  460071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 07:47:20.923027  460071 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 07:47:20.923167  460071 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 07:47:21.388682  460071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 07:47:21.404430  460071 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 07:47:21.404509  460071 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 07:47:21.888688  460071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 07:47:21.921815  460071 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 07:47:21.921939  460071 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 07:47:22.389386  460071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 07:47:22.397644  460071 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 07:47:22.398664  460071 api_server.go:141] control plane version: v1.34.1
	I1002 07:47:22.398690  460071 api_server.go:131] duration metric: took 1.510130462s to wait for apiserver health ...
	I1002 07:47:22.398699  460071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 07:47:22.402179  460071 system_pods.go:59] 7 kube-system pods found
	I1002 07:47:22.402223  460071 system_pods.go:61] "coredns-66bc5c9577-5fglk" [db096af0-568e-459a-b2a9-3139e8957c8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 07:47:22.402261  460071 system_pods.go:61] "etcd-pause-422707" [267a51be-e04f-4dfa-9823-8af325902dea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 07:47:22.402274  460071 system_pods.go:61] "kindnet-gkbbj" [409e91ec-a4dc-47dd-9b39-6ddf23e0dad3] Running
	I1002 07:47:22.402282  460071 system_pods.go:61] "kube-apiserver-pause-422707" [05e165a9-496a-41b2-8873-5cb063b782df] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 07:47:22.402290  460071 system_pods.go:61] "kube-controller-manager-pause-422707" [5c0db4fc-3582-4549-9edc-87ec0afa87e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 07:47:22.402300  460071 system_pods.go:61] "kube-proxy-mjj7w" [e1cddb37-a181-4f8e-b71c-e8240c6269c6] Running
	I1002 07:47:22.402323  460071 system_pods.go:61] "kube-scheduler-pause-422707" [6f937270-8ecd-49e0-b66d-85abf2c29010] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 07:47:22.402333  460071 system_pods.go:74] duration metric: took 3.627141ms to wait for pod list to return data ...
	I1002 07:47:22.402345  460071 default_sa.go:34] waiting for default service account to be created ...
	I1002 07:47:22.405306  460071 default_sa.go:45] found service account: "default"
	I1002 07:47:22.405334  460071 default_sa.go:55] duration metric: took 2.982984ms for default service account to be created ...
	I1002 07:47:22.405344  460071 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 07:47:22.408691  460071 system_pods.go:86] 7 kube-system pods found
	I1002 07:47:22.408729  460071 system_pods.go:89] "coredns-66bc5c9577-5fglk" [db096af0-568e-459a-b2a9-3139e8957c8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 07:47:22.408738  460071 system_pods.go:89] "etcd-pause-422707" [267a51be-e04f-4dfa-9823-8af325902dea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 07:47:22.408775  460071 system_pods.go:89] "kindnet-gkbbj" [409e91ec-a4dc-47dd-9b39-6ddf23e0dad3] Running
	I1002 07:47:22.408785  460071 system_pods.go:89] "kube-apiserver-pause-422707" [05e165a9-496a-41b2-8873-5cb063b782df] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 07:47:22.408797  460071 system_pods.go:89] "kube-controller-manager-pause-422707" [5c0db4fc-3582-4549-9edc-87ec0afa87e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 07:47:22.408801  460071 system_pods.go:89] "kube-proxy-mjj7w" [e1cddb37-a181-4f8e-b71c-e8240c6269c6] Running
	I1002 07:47:22.408813  460071 system_pods.go:89] "kube-scheduler-pause-422707" [6f937270-8ecd-49e0-b66d-85abf2c29010] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 07:47:22.408820  460071 system_pods.go:126] duration metric: took 3.470356ms to wait for k8s-apps to be running ...
	I1002 07:47:22.408857  460071 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 07:47:22.408920  460071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:47:22.422284  460071 system_svc.go:56] duration metric: took 13.429481ms WaitForService to wait for kubelet
	I1002 07:47:22.422314  460071 kubeadm.go:586] duration metric: took 8.499450115s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:47:22.422335  460071 node_conditions.go:102] verifying NodePressure condition ...
	I1002 07:47:22.425612  460071 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 07:47:22.425646  460071 node_conditions.go:123] node cpu capacity is 2
	I1002 07:47:22.425660  460071 node_conditions.go:105] duration metric: took 3.319331ms to run NodePressure ...
	I1002 07:47:22.425674  460071 start.go:241] waiting for startup goroutines ...
	I1002 07:47:22.425681  460071 start.go:246] waiting for cluster config update ...
	I1002 07:47:22.425690  460071 start.go:255] writing updated cluster config ...
	I1002 07:47:22.426034  460071 ssh_runner.go:195] Run: rm -f paused
	I1002 07:47:22.429749  460071 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:47:22.430372  460071 kapi.go:59] client config for pause-422707: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:47:22.434545  460071 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5fglk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:20.377299  447344 cri.go:89] found id: ""
	I1002 07:47:20.377369  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.377392  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:20.377419  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:20.377464  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:20.549009  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:20.549047  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:20.566218  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:20.566246  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:20.683290  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:20.683363  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:20.683381  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:20.734581  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:20.734677  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:23.282536  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:23.292424  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:23.292494  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:23.324997  447344 cri.go:89] found id: ""
	I1002 07:47:23.325022  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.325037  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:23.325047  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:23.325110  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:23.350463  447344 cri.go:89] found id: ""
	I1002 07:47:23.350490  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.350500  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:23.350506  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:23.350564  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:23.382900  447344 cri.go:89] found id: ""
	I1002 07:47:23.382931  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.382946  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:23.382952  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:23.383028  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:23.413972  447344 cri.go:89] found id: ""
	I1002 07:47:23.413998  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.414007  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:23.414014  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:23.414122  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:23.446999  447344 cri.go:89] found id: ""
	I1002 07:47:23.447026  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.447036  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:23.447042  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:23.447152  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:23.477351  447344 cri.go:89] found id: ""
	I1002 07:47:23.477418  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.477441  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:23.477464  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:23.477555  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:23.503744  447344 cri.go:89] found id: ""
	I1002 07:47:23.503772  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.503781  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:23.503788  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:23.503847  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:23.536169  447344 cri.go:89] found id: ""
	I1002 07:47:23.536194  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.536203  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:23.536213  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:23.536225  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:23.650595  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:23.650632  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:23.671009  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:23.671039  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:23.752854  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:23.752872  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:23.752889  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:23.792445  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:23.792484  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:47:24.439828  460071 pod_ready.go:104] pod "coredns-66bc5c9577-5fglk" is not "Ready", error: <nil>
	W1002 07:47:26.441305  460071 pod_ready.go:104] pod "coredns-66bc5c9577-5fglk" is not "Ready", error: <nil>
	I1002 07:47:27.940763  460071 pod_ready.go:94] pod "coredns-66bc5c9577-5fglk" is "Ready"
	I1002 07:47:27.940791  460071 pod_ready.go:86] duration metric: took 5.506218476s for pod "coredns-66bc5c9577-5fglk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:27.943649  460071 pod_ready.go:83] waiting for pod "etcd-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:27.948413  460071 pod_ready.go:94] pod "etcd-pause-422707" is "Ready"
	I1002 07:47:27.948438  460071 pod_ready.go:86] duration metric: took 4.766794ms for pod "etcd-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:27.950903  460071 pod_ready.go:83] waiting for pod "kube-apiserver-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:26.330284  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:26.340368  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:26.340442  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:26.369685  447344 cri.go:89] found id: ""
	I1002 07:47:26.369711  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.369720  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:26.369728  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:26.369788  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:26.396524  447344 cri.go:89] found id: ""
	I1002 07:47:26.396552  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.396562  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:26.396569  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:26.396655  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:26.426888  447344 cri.go:89] found id: ""
	I1002 07:47:26.426915  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.426930  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:26.426938  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:26.427025  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:26.459403  447344 cri.go:89] found id: ""
	I1002 07:47:26.459427  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.459436  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:26.459442  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:26.459523  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:26.485083  447344 cri.go:89] found id: ""
	I1002 07:47:26.485107  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.485116  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:26.485123  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:26.485189  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:26.510887  447344 cri.go:89] found id: ""
	I1002 07:47:26.510914  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.510924  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:26.510931  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:26.511000  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:26.539513  447344 cri.go:89] found id: ""
	I1002 07:47:26.539538  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.539547  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:26.539553  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:26.539614  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:26.569497  447344 cri.go:89] found id: ""
	I1002 07:47:26.569523  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.569533  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:26.569543  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:26.569554  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:26.606001  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:26.606036  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:26.639212  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:26.639241  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:26.759608  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:26.759648  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:26.776023  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:26.776061  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:26.843605  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:29.343844  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:29.353821  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:29.353896  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:29.384551  447344 cri.go:89] found id: ""
	I1002 07:47:29.384577  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.384587  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:29.384597  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:29.384654  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:29.410405  447344 cri.go:89] found id: ""
	I1002 07:47:29.410433  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.410442  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:29.410454  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:29.410530  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:29.436136  447344 cri.go:89] found id: ""
	I1002 07:47:29.436163  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.436172  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:29.436179  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:29.436245  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:29.468270  447344 cri.go:89] found id: ""
	I1002 07:47:29.468295  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.468311  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:29.468322  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:29.468384  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:29.499036  447344 cri.go:89] found id: ""
	I1002 07:47:29.499061  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.499070  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:29.499077  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:29.499164  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:29.526252  447344 cri.go:89] found id: ""
	I1002 07:47:29.526279  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.526295  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:29.526304  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:29.526364  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:29.552735  447344 cri.go:89] found id: ""
	I1002 07:47:29.552764  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.552773  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:29.552780  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:29.552856  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:29.579849  447344 cri.go:89] found id: ""
	I1002 07:47:29.579875  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.579884  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:29.579894  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:29.579905  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:29.695358  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:29.695402  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:29.711633  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:29.711663  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:29.781134  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:29.781156  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:29.781179  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:29.817012  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:29.817049  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:29.456859  460071 pod_ready.go:94] pod "kube-apiserver-pause-422707" is "Ready"
	I1002 07:47:29.456883  460071 pod_ready.go:86] duration metric: took 1.505953302s for pod "kube-apiserver-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:29.460575  460071 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 07:47:31.466943  460071 pod_ready.go:104] pod "kube-controller-manager-pause-422707" is not "Ready", error: <nil>
	I1002 07:47:31.966632  460071 pod_ready.go:94] pod "kube-controller-manager-pause-422707" is "Ready"
	I1002 07:47:31.966659  460071 pod_ready.go:86] duration metric: took 2.506061859s for pod "kube-controller-manager-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:31.969027  460071 pod_ready.go:83] waiting for pod "kube-proxy-mjj7w" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:31.973792  460071 pod_ready.go:94] pod "kube-proxy-mjj7w" is "Ready"
	I1002 07:47:31.973819  460071 pod_ready.go:86] duration metric: took 4.766711ms for pod "kube-proxy-mjj7w" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:32.139215  460071 pod_ready.go:83] waiting for pod "kube-scheduler-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 07:47:34.145068  460071 pod_ready.go:104] pod "kube-scheduler-pause-422707" is not "Ready", error: <nil>
	I1002 07:47:34.646042  460071 pod_ready.go:94] pod "kube-scheduler-pause-422707" is "Ready"
	I1002 07:47:34.646071  460071 pod_ready.go:86] duration metric: took 2.506827373s for pod "kube-scheduler-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:34.646085  460071 pod_ready.go:40] duration metric: took 12.216302403s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:47:34.707742  460071 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 07:47:34.710691  460071 out.go:179] * Done! kubectl is now configured to use "pause-422707" cluster and "default" namespace by default
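The interleaved lines from process 460071 belong to the pause-422707 start, which waits for each kube-system control-plane pod to report the Ready condition before printing "Done!". Below is a rough, hypothetical sketch of an equivalent readiness poll driven through kubectl's JSONPath output; the pod, namespace and context names are taken from the log, while the helper itself is illustrative (minikube does this through client-go internally, not by shelling out to kubectl).

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls the pod's Ready condition until it is True or the
// timeout expires, conceptually matching the pod_ready loop in the log.
func waitPodReady(context, namespace, pod string, timeout time.Duration) error {
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
			"get", "pod", pod, "-o", "jsonpath="+jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", namespace, pod, timeout)
}

func main() {
	for _, p := range []string{
		"kube-apiserver-pause-422707",
		"kube-controller-manager-pause-422707",
		"kube-proxy-mjj7w",
		"kube-scheduler-pause-422707",
	} {
		if err := waitPodReady("pause-422707", "kube-system", p, 2*time.Minute); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println(p, "is Ready")
		}
	}
}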
	I1002 07:47:32.348641  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:32.359837  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:32.359913  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:32.389393  447344 cri.go:89] found id: ""
	I1002 07:47:32.389417  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.389426  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:32.389433  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:32.389494  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:32.416857  447344 cri.go:89] found id: ""
	I1002 07:47:32.416881  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.416890  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:32.416896  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:32.416960  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:32.448018  447344 cri.go:89] found id: ""
	I1002 07:47:32.448039  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.448048  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:32.448057  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:32.448116  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:32.479836  447344 cri.go:89] found id: ""
	I1002 07:47:32.479861  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.479869  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:32.479876  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:32.479945  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:32.506181  447344 cri.go:89] found id: ""
	I1002 07:47:32.506207  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.506217  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:32.506224  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:32.506317  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:32.539470  447344 cri.go:89] found id: ""
	I1002 07:47:32.539533  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.539551  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:32.539558  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:32.539615  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:32.570033  447344 cri.go:89] found id: ""
	I1002 07:47:32.570064  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.570073  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:32.570080  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:32.570139  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:32.595512  447344 cri.go:89] found id: ""
	I1002 07:47:32.595538  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.595547  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:32.595556  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:32.595567  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:32.711274  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:32.711315  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:32.727547  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:32.727577  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:32.798460  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:32.798492  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:32.798516  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:32.834943  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:32.834981  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.559833598Z" level=info msg="Started container" PID=2348 containerID=b8364eff63eb27502280c15e72f050b391d6c48bdc1e0b15e12b991cbe65b4e2 description=kube-system/etcd-pause-422707/etcd id=b57f6ab4-acd3-4628-a9ce-a18b73e82f93 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dcdfa6de9aae1cd0db1cceeb7e3403b10ac8e58a02eb8b8b16add2c6e91df41
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.560247304Z" level=info msg="Starting container: de61fc1c61af20cceeee6e8c3ff2c66f1d72b4eff29e7df072f688c447638dc5" id=a9a3a530-41d4-4ec2-a1d4-207c10cf65fc name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.561576448Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.562148818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.569855239Z" level=info msg="Started container" PID=2356 containerID=de61fc1c61af20cceeee6e8c3ff2c66f1d72b4eff29e7df072f688c447638dc5 description=kube-system/kube-proxy-mjj7w/kube-proxy id=a9a3a530-41d4-4ec2-a1d4-207c10cf65fc name=/runtime.v1.RuntimeService/StartContainer sandboxID=08ba7f83c7655621b6ed136ecc0b549f2fe59bd083c08c5d7f090180c947bf5f
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.570697978Z" level=info msg="Started container" PID=2358 containerID=d120fcee17433144b61042570d7426dbbea18ad38caae066f3c488e1d546fa5f description=kube-system/kindnet-gkbbj/kindnet-cni id=b7f81b13-6343-461c-9ce0-f4f988768ecb name=/runtime.v1.RuntimeService/StartContainer sandboxID=7e49edcdb9062f1f98723c2a53af28d47363195bad4453dda0e2f1cce9614cfb
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.574399351Z" level=info msg="Started container" PID=2341 containerID=7417b7c7f3bfda98962f017b5a0510c9c2693d339c94453d0849e7de2eb9d8d4 description=kube-system/kube-apiserver-pause-422707/kube-apiserver id=f4b27314-91d3-4688-a866-1fb2fa099bcd name=/runtime.v1.RuntimeService/StartContainer sandboxID=897f0a0b9a69773bb7025d60de1ff8f36b9965c7b680797947fd1a2821c58483
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.605219257Z" level=info msg="Created container bd2ad8230b36a900ce2e1a29b1b8034616f748c947febfbfde97a91c24efb068: kube-system/coredns-66bc5c9577-5fglk/coredns" id=67ed32bf-3840-405e-9fce-c6608de167e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.605894611Z" level=info msg="Starting container: bd2ad8230b36a900ce2e1a29b1b8034616f748c947febfbfde97a91c24efb068" id=0fc12c14-75ea-44cd-bac4-1ff84e6eefd7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.608534313Z" level=info msg="Started container" PID=2389 containerID=bd2ad8230b36a900ce2e1a29b1b8034616f748c947febfbfde97a91c24efb068 description=kube-system/coredns-66bc5c9577-5fglk/coredns id=0fc12c14-75ea-44cd-bac4-1ff84e6eefd7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e97169008a73694d86720e20c126bed7990928193e04b6dc665b1021619b5c4
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.920475018Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.924075894Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.924112177Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.924136112Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.927024826Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.92706147Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.927109028Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.930376535Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.930413031Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.930437155Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.933654897Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.933692263Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.933716977Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.941169972Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.941205763Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	bd2ad8230b36a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   4e97169008a73       coredns-66bc5c9577-5fglk               kube-system
	d120fcee17433       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   7e49edcdb9062       kindnet-gkbbj                          kube-system
	de61fc1c61af2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   22 seconds ago       Running             kube-proxy                1                   08ba7f83c7655       kube-proxy-mjj7w                       kube-system
	b8364eff63eb2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   22 seconds ago       Running             etcd                      1                   3dcdfa6de9aae       etcd-pause-422707                      kube-system
	7417b7c7f3bfd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   22 seconds ago       Running             kube-apiserver            1                   897f0a0b9a697       kube-apiserver-pause-422707            kube-system
	f7bae3cd05925       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago       Running             kube-scheduler            1                   d837912379b42       kube-scheduler-pause-422707            kube-system
	cdd11ede7258f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago       Running             kube-controller-manager   1                   c99c141b49bed       kube-controller-manager-pause-422707   kube-system
	7779786dbfb40       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   36 seconds ago       Exited              coredns                   0                   4e97169008a73       coredns-66bc5c9577-5fglk               kube-system
	fff7fe0cc7b8b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   08ba7f83c7655       kube-proxy-mjj7w                       kube-system
	4c3b3cd93e322       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   7e49edcdb9062       kindnet-gkbbj                          kube-system
	905cd7e5dfd7e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   d837912379b42       kube-scheduler-pause-422707            kube-system
	e1049b358ad25       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   c99c141b49bed       kube-controller-manager-pause-422707   kube-system
	36a0edc3f91c5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   897f0a0b9a697       kube-apiserver-pause-422707            kube-system
	bf6dbc138db36       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   3dcdfa6de9aae       etcd-pause-422707                      kube-system
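The container status table above shows the attempt-0 containers in Exited state and their attempt-1 replacements Running after the restart. As an illustrative sketch (not part of the test output), the same information can be pulled in machine-readable form from "crictl ps -a -o json"; the field names below follow the CRI ListContainers response, but they should be verified against the crictl version in use.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criContainers models a minimal subset of `crictl ps -a -o json` output.
type criContainers struct {
	Containers []struct {
		ID       string `json:"id"`
		Metadata struct {
			Name    string `json:"name"`
			Attempt uint32 `json:"attempt"`
		} `json:"metadata"`
		State string `json:"state"`
	} `json:"containers"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "-o", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var cs criContainers
	if err := json.Unmarshal(out, &cs); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	// Print name, attempt and state, mirroring the columns of the table above.
	for _, c := range cs.Containers {
		fmt.Printf("%-30s attempt=%d %s\n", c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}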
	
	
	==> coredns [7779786dbfb40f2436252d55263d5b88b48a937678c675a5ec383b2da42c5be2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49223 - 26368 "HINFO IN 4398010615553394589.7569334930726868245. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022762105s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bd2ad8230b36a900ce2e1a29b1b8034616f748c947febfbfde97a91c24efb068] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42260 - 11084 "HINFO IN 4997789750218908676.854079443307176604. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.017617775s
	
	
	==> describe nodes <==
	Name:               pause-422707
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-422707
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=pause-422707
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_46_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:46:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-422707
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:47:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:47:00 +0000   Thu, 02 Oct 2025 07:46:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:47:00 +0000   Thu, 02 Oct 2025 07:46:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:47:00 +0000   Thu, 02 Oct 2025 07:46:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:47:00 +0000   Thu, 02 Oct 2025 07:47:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-422707
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0dd4ccaadc142828afec45a8ed1f363
	  System UUID:                58298745-8344-40a9-8a8a-d872fb025589
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5fglk                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     78s
	  kube-system                 etcd-pause-422707                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kindnet-gkbbj                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-422707             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-422707    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-mjj7w                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-422707             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 77s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   Starting                 92s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 92s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     91s (x8 over 92s)  kubelet          Node pause-422707 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    91s (x8 over 92s)  kubelet          Node pause-422707 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  91s (x8 over 92s)  kubelet          Node pause-422707 status is now: NodeHasSufficientMemory
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s                kubelet          Node pause-422707 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s                kubelet          Node pause-422707 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s                kubelet          Node pause-422707 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           79s                node-controller  Node pause-422707 event: Registered Node pause-422707 in Controller
	  Normal   NodeReady                38s                kubelet          Node pause-422707 status is now: NodeReady
	  Normal   RegisteredNode           14s                node-controller  Node pause-422707 event: Registered Node pause-422707 in Controller
	
	
	==> dmesg <==
	[Oct 2 07:06] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:07] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:08] overlayfs: idmapped layers are currently not supported
	[  +3.056037] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:16] overlayfs: idmapped layers are currently not supported
	[  +2.690454] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:30] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:31] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:33] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b8364eff63eb27502280c15e72f050b391d6c48bdc1e0b15e12b991cbe65b4e2] <==
	{"level":"warn","ts":"2025-10-02T07:47:19.008073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.029300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.052760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.072181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.090789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.108156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.129483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.144159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.172283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.191769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.227900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.258038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.273917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.291958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.313784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.342227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.352255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.369928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.387373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.405783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.431717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.494288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.495997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.547517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.641100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54236","server-name":"","error":"EOF"}
	
	
	==> etcd [bf6dbc138db362cfff432db0b771a54903e496cfb0cf5bd18097881ec91376c4] <==
	{"level":"warn","ts":"2025-10-02T07:46:11.256198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:46:11.267863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:46:11.287953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:46:11.326174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:46:11.346197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:46:11.358813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:46:11.448056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59680","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T07:47:05.778152Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T07:47:05.778224Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-422707","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-02T07:47:05.778328Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T07:47:05.778400Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T07:47:06.069786Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:47:06.069851Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-02T07:47:06.069923Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T07:47:06.069886Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:47:06.069946Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:47:06.069955Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:47:06.069937Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T07:47:06.069991Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:47:06.070002Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:47:06.070009Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:47:06.073443Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-02T07:47:06.073535Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:47:06.073569Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-02T07:47:06.073576Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-422707","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 07:47:38 up  2:30,  0 user,  load average: 2.23, 2.80, 2.31
	Linux pause-422707 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c3b3cd93e322872b86d37772d4707046419be26c02a2e63639ac63fef43bb5b] <==
	I1002 07:46:20.709352       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 07:46:20.710662       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 07:46:20.710884       1 main.go:148] setting mtu 1500 for CNI 
	I1002 07:46:20.710928       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 07:46:20.711131       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T07:46:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 07:46:20.912098       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 07:46:20.912125       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 07:46:20.912134       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 07:46:20.912693       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 07:46:50.912333       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 07:46:50.912455       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 07:46:50.913573       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 07:46:51.000207       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1002 07:46:52.212801       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 07:46:52.212846       1 metrics.go:72] Registering metrics
	I1002 07:46:52.212907       1 controller.go:711] "Syncing nftables rules"
	I1002 07:47:00.912110       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 07:47:00.912164       1 main.go:301] handling current node
	
	
	==> kindnet [d120fcee17433144b61042570d7426dbbea18ad38caae066f3c488e1d546fa5f] <==
	I1002 07:47:15.709451       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 07:47:15.709668       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 07:47:15.709794       1 main.go:148] setting mtu 1500 for CNI 
	I1002 07:47:15.709856       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 07:47:15.709896       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T07:47:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 07:47:15.918427       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 07:47:15.927166       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 07:47:15.927294       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 07:47:15.927472       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 07:47:21.129448       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 07:47:21.129871       1 metrics.go:72] Registering metrics
	I1002 07:47:21.129975       1 controller.go:711] "Syncing nftables rules"
	I1002 07:47:25.920048       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 07:47:25.920133       1 main.go:301] handling current node
	I1002 07:47:35.918455       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 07:47:35.918504       1 main.go:301] handling current node
	
	
	==> kube-apiserver [36a0edc3f91c599e64798a3222fc111e434ab4a719442e7564de7ee2187ca26a] <==
	W1002 07:47:05.797301       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797348       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797390       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797458       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797518       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797564       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797621       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797682       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797770       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797815       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797854       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797897       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797941       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797987       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.798030       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.798074       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.798118       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.798866       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.799602       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.799712       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.799799       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.799883       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.799966       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.800322       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.803578       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [7417b7c7f3bfda98962f017b5a0510c9c2693d339c94453d0849e7de2eb9d8d4] <==
	I1002 07:47:21.011941       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 07:47:21.041824       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:47:21.065836       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 07:47:21.066164       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 07:47:21.073021       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 07:47:21.073112       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 07:47:21.073288       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 07:47:21.073351       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 07:47:21.073444       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 07:47:21.073512       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1002 07:47:21.073580       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 07:47:21.073646       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 07:47:21.092104       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:47:21.092268       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 07:47:21.093156       1 aggregator.go:171] initial CRD sync complete...
	I1002 07:47:21.093719       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 07:47:21.093777       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 07:47:21.093807       1 cache.go:39] Caches are synced for autoregister controller
	E1002 07:47:21.114671       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 07:47:21.692788       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:47:22.903367       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 07:47:24.336249       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 07:47:24.582057       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:47:24.635308       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:47:24.684518       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [cdd11ede7258ff6809046b22ade252d706e70a12ce550aebbe4814c12e32f694] <==
	I1002 07:47:24.298785       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 07:47:24.320033       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 07:47:24.320113       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 07:47:24.320149       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 07:47:24.320163       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 07:47:24.320170       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 07:47:24.322794       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 07:47:24.322966       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 07:47:24.323142       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-422707"
	I1002 07:47:24.323228       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 07:47:24.324912       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 07:47:24.325454       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 07:47:24.325565       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 07:47:24.327144       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 07:47:24.327239       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 07:47:24.327289       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 07:47:24.327678       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 07:47:24.328928       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 07:47:24.329011       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 07:47:24.330509       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:47:24.331639       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 07:47:24.335696       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 07:47:24.337068       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:47:24.352511       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 07:47:24.355647       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	
	
	==> kube-controller-manager [e1049b358ad259731384916f35ccf90b48b850267f7aed64a45d9db512a3a6d2] <==
	I1002 07:46:19.267852       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:46:19.268079       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 07:46:19.268105       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 07:46:19.268162       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 07:46:19.272725       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 07:46:19.284937       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 07:46:19.290928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:46:19.295386       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 07:46:19.300739       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 07:46:19.300922       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 07:46:19.300986       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 07:46:19.301036       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 07:46:19.301078       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 07:46:19.308837       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 07:46:19.308964       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 07:46:19.309844       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 07:46:19.309924       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 07:46:19.315874       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 07:46:19.316023       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 07:46:19.316586       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-422707"
	I1002 07:46:19.316700       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 07:46:19.326623       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 07:46:19.338446       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-422707" podCIDRs=["10.244.0.0/24"]
	I1002 07:46:19.368130       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:47:04.323968       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [de61fc1c61af20cceeee6e8c3ff2c66f1d72b4eff29e7df072f688c447638dc5] <==
	I1002 07:47:18.764535       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:47:19.433666       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:47:21.035198       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:47:21.035245       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 07:47:21.035312       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:47:21.339724       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:47:21.346330       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:47:21.352626       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:47:21.352982       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:47:21.353171       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:47:21.354452       1 config.go:200] "Starting service config controller"
	I1002 07:47:21.354514       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:47:21.354555       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:47:21.354593       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:47:21.354632       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:47:21.354658       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:47:21.375171       1 config.go:309] "Starting node config controller"
	I1002 07:47:21.427763       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:47:21.427842       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:47:21.454845       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 07:47:21.454931       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:47:21.454959       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [fff7fe0cc7b8b2200c8f3298384331b60916e87b46e04f1d6751ac804e1bd38e] <==
	I1002 07:46:20.699831       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:46:20.792936       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:46:20.893600       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:46:20.893638       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 07:46:20.893723       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:46:20.915793       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:46:20.915914       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:46:20.920762       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:46:20.921170       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:46:20.921245       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:46:20.922500       1 config.go:200] "Starting service config controller"
	I1002 07:46:20.922572       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:46:20.922621       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:46:20.922655       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:46:20.922704       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:46:20.922729       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:46:20.923662       1 config.go:309] "Starting node config controller"
	I1002 07:46:20.923730       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:46:20.923761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:46:21.023663       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 07:46:21.023772       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:46:21.023801       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [905cd7e5dfd7ea9891c435d909e83a9b93ede8e42ba50c4ca101e96e91b91bcd] <==
	E1002 07:46:12.303191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:46:12.303231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:46:12.303307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:46:12.303363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:46:12.303478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:46:12.303552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:46:12.303611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 07:46:13.182809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:46:13.194456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:46:13.216402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:46:13.234767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:46:13.257297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:46:13.257579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:46:13.323423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:46:13.383333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:46:13.387735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:46:13.435376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:46:13.590781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1002 07:46:15.177974       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:47:05.785457       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:47:05.786046       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 07:47:05.786110       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 07:47:05.786161       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 07:47:05.786237       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 07:47:05.786286       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f7bae3cd05925ab12ba039c66e40c1c68b06fd8f8c2effc0320d367c8336d488] <==
	I1002 07:47:19.283430       1 serving.go:386] Generated self-signed cert in-memory
	I1002 07:47:22.182276       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 07:47:22.182320       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:47:22.188237       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 07:47:22.188336       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 07:47:22.188409       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:47:22.188447       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:47:22.188485       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:47:22.188518       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:47:22.188647       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 07:47:22.188717       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 07:47:22.288786       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 07:47:22.288949       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:47:22.288946       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.399561    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="557ebaeaf415604bade03417d103c013" pod="kube-system/kube-controller-manager-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: I1002 07:47:15.420635    1306 scope.go:117] "RemoveContainer" containerID="fff7fe0cc7b8b2200c8f3298384331b60916e87b46e04f1d6751ac804e1bd38e"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.421191    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="12b693a2a055e251c1b61556927a30a4" pod="kube-system/kube-scheduler-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.421394    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="44636457c2acad4cb2d7258f7377957e" pod="kube-system/kube-apiserver-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.421571    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1e8a1a47dce612b67e76b131801e7387" pod="kube-system/etcd-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.421741    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="557ebaeaf415604bade03417d103c013" pod="kube-system/kube-controller-manager-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.422106    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjj7w\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e1cddb37-a181-4f8e-b71c-e8240c6269c6" pod="kube-system/kube-proxy-mjj7w"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: I1002 07:47:15.447275    1306 scope.go:117] "RemoveContainer" containerID="4c3b3cd93e322872b86d37772d4707046419be26c02a2e63639ac63fef43bb5b"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.448495    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="44636457c2acad4cb2d7258f7377957e" pod="kube-system/kube-apiserver-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.448860    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1e8a1a47dce612b67e76b131801e7387" pod="kube-system/etcd-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.449464    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="557ebaeaf415604bade03417d103c013" pod="kube-system/kube-controller-manager-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.450093    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gkbbj\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="409e91ec-a4dc-47dd-9b39-6ddf23e0dad3" pod="kube-system/kindnet-gkbbj"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.450846    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjj7w\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e1cddb37-a181-4f8e-b71c-e8240c6269c6" pod="kube-system/kube-proxy-mjj7w"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.451383    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="12b693a2a055e251c1b61556927a30a4" pod="kube-system/kube-scheduler-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.479693    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gkbbj\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="409e91ec-a4dc-47dd-9b39-6ddf23e0dad3" pod="kube-system/kindnet-gkbbj"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.479903    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjj7w\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e1cddb37-a181-4f8e-b71c-e8240c6269c6" pod="kube-system/kube-proxy-mjj7w"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.480085    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-5fglk\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="db096af0-568e-459a-b2a9-3139e8957c8a" pod="kube-system/coredns-66bc5c9577-5fglk"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.480254    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="12b693a2a055e251c1b61556927a30a4" pod="kube-system/kube-scheduler-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.480410    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="44636457c2acad4cb2d7258f7377957e" pod="kube-system/kube-apiserver-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.480557    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1e8a1a47dce612b67e76b131801e7387" pod="kube-system/etcd-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.480717    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="557ebaeaf415604bade03417d103c013" pod="kube-system/kube-controller-manager-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: I1002 07:47:15.480790    1306 scope.go:117] "RemoveContainer" containerID="7779786dbfb40f2436252d55263d5b88b48a937678c675a5ec383b2da42c5be2"
	Oct 02 07:47:35 pause-422707 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 07:47:35 pause-422707 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 07:47:35 pause-422707 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-422707 -n pause-422707
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-422707 -n pause-422707: exit status 2 (363.453092ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-422707 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-422707
helpers_test.go:243: (dbg) docker inspect pause-422707:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7d708e6feb9fe71a3bfff6208e6e1660afce026103466341af50357737db414b",
	        "Created": "2025-10-02T07:45:47.046314868Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 455826,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:45:47.114037347Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/7d708e6feb9fe71a3bfff6208e6e1660afce026103466341af50357737db414b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d708e6feb9fe71a3bfff6208e6e1660afce026103466341af50357737db414b/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d708e6feb9fe71a3bfff6208e6e1660afce026103466341af50357737db414b/hosts",
	        "LogPath": "/var/lib/docker/containers/7d708e6feb9fe71a3bfff6208e6e1660afce026103466341af50357737db414b/7d708e6feb9fe71a3bfff6208e6e1660afce026103466341af50357737db414b-json.log",
	        "Name": "/pause-422707",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-422707:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-422707",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d708e6feb9fe71a3bfff6208e6e1660afce026103466341af50357737db414b",
	                "LowerDir": "/var/lib/docker/overlay2/d2ac33d6bea0c6956c76633f936e852aadd17a3f2d6afe8077f7e0a8db132299-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2ac33d6bea0c6956c76633f936e852aadd17a3f2d6afe8077f7e0a8db132299/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2ac33d6bea0c6956c76633f936e852aadd17a3f2d6afe8077f7e0a8db132299/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2ac33d6bea0c6956c76633f936e852aadd17a3f2d6afe8077f7e0a8db132299/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-422707",
	                "Source": "/var/lib/docker/volumes/pause-422707/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-422707",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-422707",
	                "name.minikube.sigs.k8s.io": "pause-422707",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "07a0969bc33fbba3fcd568d6e6238030debac6332c75c4058fba2cdea25bd6a2",
	            "SandboxKey": "/var/run/docker/netns/07a0969bc33f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33373"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33374"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33377"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33375"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33376"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-422707": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:45:ed:24:da:c3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f690f11d2824d5c1d0d4b881867c9a0fa545f04fd81cf4a885ec314b2e8f033c",
	                    "EndpointID": "dcc04cedfcf239918562bd931d0a3a7c1c038a2ec793362f0481bd59a36e26c1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-422707",
	                        "7d708e6feb9f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-422707 -n pause-422707
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-422707 -n pause-422707: exit status 2 (338.877661ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-422707 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-422707 logs -n 25: (1.407471296s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-050176 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:41 UTC │ 02 Oct 25 07:42 UTC │
	│ start   │ -p missing-upgrade-857609 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-857609    │ jenkins │ v1.32.0 │ 02 Oct 25 07:41 UTC │ 02 Oct 25 07:42 UTC │
	│ start   │ -p NoKubernetes-050176 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:42 UTC │ 02 Oct 25 07:43 UTC │
	│ start   │ -p missing-upgrade-857609 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-857609    │ jenkins │ v1.37.0 │ 02 Oct 25 07:42 UTC │ 02 Oct 25 07:43 UTC │
	│ delete  │ -p NoKubernetes-050176                                                                                                                   │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:43 UTC │
	│ start   │ -p NoKubernetes-050176 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:43 UTC │
	│ ssh     │ -p NoKubernetes-050176 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │                     │
	│ stop    │ -p NoKubernetes-050176                                                                                                                   │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:43 UTC │
	│ start   │ -p NoKubernetes-050176 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:43 UTC │
	│ ssh     │ -p NoKubernetes-050176 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │                     │
	│ delete  │ -p NoKubernetes-050176                                                                                                                   │ NoKubernetes-050176       │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:43 UTC │
	│ start   │ -p kubernetes-upgrade-011391 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-011391 │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:44 UTC │
	│ delete  │ -p missing-upgrade-857609                                                                                                                │ missing-upgrade-857609    │ jenkins │ v1.37.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:43 UTC │
	│ start   │ -p stopped-upgrade-151473 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-151473    │ jenkins │ v1.32.0 │ 02 Oct 25 07:43 UTC │ 02 Oct 25 07:44 UTC │
	│ stop    │ -p kubernetes-upgrade-011391                                                                                                             │ kubernetes-upgrade-011391 │ jenkins │ v1.37.0 │ 02 Oct 25 07:44 UTC │ 02 Oct 25 07:44 UTC │
	│ start   │ -p kubernetes-upgrade-011391 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-011391 │ jenkins │ v1.37.0 │ 02 Oct 25 07:44 UTC │                     │
	│ stop    │ stopped-upgrade-151473 stop                                                                                                              │ stopped-upgrade-151473    │ jenkins │ v1.32.0 │ 02 Oct 25 07:44 UTC │ 02 Oct 25 07:44 UTC │
	│ start   │ -p stopped-upgrade-151473 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-151473    │ jenkins │ v1.37.0 │ 02 Oct 25 07:44 UTC │ 02 Oct 25 07:44 UTC │
	│ delete  │ -p stopped-upgrade-151473                                                                                                                │ stopped-upgrade-151473    │ jenkins │ v1.37.0 │ 02 Oct 25 07:44 UTC │ 02 Oct 25 07:44 UTC │
	│ start   │ -p running-upgrade-838161 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-838161    │ jenkins │ v1.32.0 │ 02 Oct 25 07:44 UTC │ 02 Oct 25 07:45 UTC │
	│ start   │ -p running-upgrade-838161 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-838161    │ jenkins │ v1.37.0 │ 02 Oct 25 07:45 UTC │ 02 Oct 25 07:45 UTC │
	│ delete  │ -p running-upgrade-838161                                                                                                                │ running-upgrade-838161    │ jenkins │ v1.37.0 │ 02 Oct 25 07:45 UTC │ 02 Oct 25 07:45 UTC │
	│ start   │ -p pause-422707 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-422707              │ jenkins │ v1.37.0 │ 02 Oct 25 07:45 UTC │ 02 Oct 25 07:47 UTC │
	│ start   │ -p pause-422707 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-422707              │ jenkins │ v1.37.0 │ 02 Oct 25 07:47 UTC │ 02 Oct 25 07:47 UTC │
	│ pause   │ -p pause-422707 --alsologtostderr -v=5                                                                                                   │ pause-422707              │ jenkins │ v1.37.0 │ 02 Oct 25 07:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:47:04
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:47:04.374258  460071 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:47:04.374503  460071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:47:04.374534  460071 out.go:374] Setting ErrFile to fd 2...
	I1002 07:47:04.374555  460071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:47:04.374911  460071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:47:04.375398  460071 out.go:368] Setting JSON to false
	I1002 07:47:04.376519  460071 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8976,"bootTime":1759382249,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:47:04.376624  460071 start.go:140] virtualization:  
	I1002 07:47:04.380342  460071 out.go:179] * [pause-422707] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:47:04.383534  460071 notify.go:220] Checking for updates...
	I1002 07:47:04.386418  460071 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:47:04.389253  460071 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:47:04.393182  460071 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:47:04.396141  460071 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:47:04.399436  460071 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:47:04.402264  460071 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:47:04.405622  460071 config.go:182] Loaded profile config "pause-422707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:47:04.406193  460071 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:47:04.437397  460071 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:47:04.437576  460071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:47:04.543012  460071 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 07:47:04.532775282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:47:04.543200  460071 docker.go:318] overlay module found
	I1002 07:47:04.548265  460071 out.go:179] * Using the docker driver based on existing profile
	I1002 07:47:04.551190  460071 start.go:304] selected driver: docker
	I1002 07:47:04.551213  460071 start.go:924] validating driver "docker" against &{Name:pause-422707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-422707 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:47:04.551352  460071 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:47:04.551464  460071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:47:04.654462  460071 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 07:47:04.6417758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:
/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:47:04.654925  460071 cni.go:84] Creating CNI manager for ""
	I1002 07:47:04.655001  460071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:47:04.655055  460071 start.go:348] cluster config:
	{Name:pause-422707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-422707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:47:04.660061  460071 out.go:179] * Starting "pause-422707" primary control-plane node in "pause-422707" cluster
	I1002 07:47:04.662890  460071 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:47:04.665829  460071 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:47:04.668679  460071 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:47:04.668746  460071 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:47:04.668746  460071 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:47:04.668757  460071 cache.go:58] Caching tarball of preloaded images
	I1002 07:47:04.668858  460071 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:47:04.668867  460071 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:47:04.669007  460071 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/config.json ...
	I1002 07:47:04.691993  460071 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:47:04.692028  460071 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:47:04.692050  460071 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:47:04.692078  460071 start.go:360] acquireMachinesLock for pause-422707: {Name:mk8e831218cb50db533345363d2b05f8b5cf7cf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:47:04.692148  460071 start.go:364] duration metric: took 43.348µs to acquireMachinesLock for "pause-422707"
	I1002 07:47:04.692170  460071 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:47:04.692187  460071 fix.go:54] fixHost starting: 
	I1002 07:47:04.692672  460071 cli_runner.go:164] Run: docker container inspect pause-422707 --format={{.State.Status}}
	I1002 07:47:04.719904  460071 fix.go:112] recreateIfNeeded on pause-422707: state=Running err=<nil>
	W1002 07:47:04.719933  460071 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:47:01.296466  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:01.306856  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:01.306928  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:01.335146  447344 cri.go:89] found id: ""
	I1002 07:47:01.335172  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.335181  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:01.335188  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:01.335251  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:01.377942  447344 cri.go:89] found id: ""
	I1002 07:47:01.377964  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.377973  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:01.377979  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:01.378036  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:01.410148  447344 cri.go:89] found id: ""
	I1002 07:47:01.410174  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.410184  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:01.410193  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:01.410298  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:01.449591  447344 cri.go:89] found id: ""
	I1002 07:47:01.449618  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.449628  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:01.449634  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:01.449702  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:01.477647  447344 cri.go:89] found id: ""
	I1002 07:47:01.477673  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.477690  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:01.477697  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:01.477763  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:01.513349  447344 cri.go:89] found id: ""
	I1002 07:47:01.513373  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.513391  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:01.513398  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:01.513454  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:01.540332  447344 cri.go:89] found id: ""
	I1002 07:47:01.540357  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.540367  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:01.540373  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:01.540435  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:01.570253  447344 cri.go:89] found id: ""
	I1002 07:47:01.570279  447344 logs.go:282] 0 containers: []
	W1002 07:47:01.570289  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:01.570302  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:01.570313  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:01.689548  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:01.689588  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:01.707435  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:01.707469  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:01.782322  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:01.782345  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:01.782358  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:01.820211  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:01.820252  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:04.352012  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:04.364147  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:04.364223  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:04.398407  447344 cri.go:89] found id: ""
	I1002 07:47:04.398427  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.398435  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:04.398442  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:04.398503  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:04.439369  447344 cri.go:89] found id: ""
	I1002 07:47:04.439395  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.439404  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:04.439410  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:04.439472  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:04.482068  447344 cri.go:89] found id: ""
	I1002 07:47:04.482089  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.482098  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:04.482104  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:04.482173  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:04.518232  447344 cri.go:89] found id: ""
	I1002 07:47:04.518260  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.518270  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:04.518277  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:04.518335  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:04.555468  447344 cri.go:89] found id: ""
	I1002 07:47:04.555490  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.555499  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:04.555506  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:04.555566  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:04.590180  447344 cri.go:89] found id: ""
	I1002 07:47:04.590203  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.590212  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:04.590219  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:04.590282  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:04.636300  447344 cri.go:89] found id: ""
	I1002 07:47:04.636321  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.636330  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:04.636336  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:04.636399  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:04.672396  447344 cri.go:89] found id: ""
	I1002 07:47:04.672421  447344 logs.go:282] 0 containers: []
	W1002 07:47:04.672430  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:04.672440  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:04.672452  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:04.817232  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:04.817303  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:04.841414  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:04.841493  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:04.926654  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:04.926680  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:04.926692  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:04.972059  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:04.972135  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:04.723047  460071 out.go:252] * Updating the running docker "pause-422707" container ...
	I1002 07:47:04.723318  460071 machine.go:93] provisionDockerMachine start ...
	I1002 07:47:04.723433  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:04.757828  460071 main.go:141] libmachine: Using SSH client type: native
	I1002 07:47:04.758269  460071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1002 07:47:04.758292  460071 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:47:04.899052  460071 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-422707
	
	I1002 07:47:04.899145  460071 ubuntu.go:182] provisioning hostname "pause-422707"
	I1002 07:47:04.899263  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:04.926325  460071 main.go:141] libmachine: Using SSH client type: native
	I1002 07:47:04.927273  460071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1002 07:47:04.927292  460071 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-422707 && echo "pause-422707" | sudo tee /etc/hostname
	I1002 07:47:05.089889  460071 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-422707
	
	I1002 07:47:05.089981  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:05.111271  460071 main.go:141] libmachine: Using SSH client type: native
	I1002 07:47:05.111600  460071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1002 07:47:05.111623  460071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-422707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-422707/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-422707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:47:05.248151  460071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:47:05.248189  460071 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:47:05.248210  460071 ubuntu.go:190] setting up certificates
	I1002 07:47:05.248220  460071 provision.go:84] configureAuth start
	I1002 07:47:05.248292  460071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-422707
	I1002 07:47:05.271931  460071 provision.go:143] copyHostCerts
	I1002 07:47:05.272000  460071 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:47:05.272022  460071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:47:05.272212  460071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:47:05.272331  460071 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:47:05.272344  460071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:47:05.272375  460071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:47:05.272444  460071 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:47:05.272454  460071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:47:05.272480  460071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:47:05.272539  460071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.pause-422707 san=[127.0.0.1 192.168.85.2 localhost minikube pause-422707]
	I1002 07:47:05.426935  460071 provision.go:177] copyRemoteCerts
	I1002 07:47:05.427005  460071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:47:05.427045  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:05.445580  460071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/pause-422707/id_rsa Username:docker}
	I1002 07:47:05.543282  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:47:05.561945  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1002 07:47:05.580202  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:47:05.598609  460071 provision.go:87] duration metric: took 350.360326ms to configureAuth
	I1002 07:47:05.598635  460071 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:47:05.598885  460071 config.go:182] Loaded profile config "pause-422707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:47:05.599013  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:05.623398  460071 main.go:141] libmachine: Using SSH client type: native
	I1002 07:47:05.623721  460071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1002 07:47:05.623742  460071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 07:47:07.515692  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:07.526342  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:07.526416  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:07.551984  447344 cri.go:89] found id: ""
	I1002 07:47:07.552010  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.552021  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:07.552028  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:07.552085  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:07.576730  447344 cri.go:89] found id: ""
	I1002 07:47:07.576753  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.576763  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:07.576769  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:07.576833  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:07.602747  447344 cri.go:89] found id: ""
	I1002 07:47:07.602772  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.602788  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:07.602794  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:07.602855  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:07.630039  447344 cri.go:89] found id: ""
	I1002 07:47:07.630065  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.630075  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:07.630082  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:07.630147  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:07.656492  447344 cri.go:89] found id: ""
	I1002 07:47:07.656518  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.656528  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:07.656535  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:07.656595  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:07.682455  447344 cri.go:89] found id: ""
	I1002 07:47:07.682483  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.682493  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:07.682500  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:07.682561  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:07.710747  447344 cri.go:89] found id: ""
	I1002 07:47:07.710772  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.710790  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:07.710797  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:07.710856  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:07.736099  447344 cri.go:89] found id: ""
	I1002 07:47:07.736126  447344 logs.go:282] 0 containers: []
	W1002 07:47:07.736135  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:07.736145  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:07.736157  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:07.847283  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:07.847321  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:07.863644  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:07.863731  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:07.935998  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:07.936019  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:07.936032  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:07.976790  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:07.976840  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:11.024908  460071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 07:47:11.024931  460071 machine.go:96] duration metric: took 6.301600949s to provisionDockerMachine
	I1002 07:47:11.024943  460071 start.go:293] postStartSetup for "pause-422707" (driver="docker")
	I1002 07:47:11.024954  460071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:47:11.025024  460071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:47:11.025063  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:11.048006  460071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/pause-422707/id_rsa Username:docker}
	I1002 07:47:11.148666  460071 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:47:11.152523  460071 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:47:11.152554  460071 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:47:11.152566  460071 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 07:47:11.152625  460071 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 07:47:11.152713  460071 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 07:47:11.152822  460071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 07:47:11.161342  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:47:11.180692  460071 start.go:296] duration metric: took 155.732643ms for postStartSetup
	I1002 07:47:11.180792  460071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:47:11.180840  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:11.198919  460071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/pause-422707/id_rsa Username:docker}
	I1002 07:47:11.292406  460071 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:47:11.297725  460071 fix.go:56] duration metric: took 6.605535897s for fixHost
	I1002 07:47:11.297753  460071 start.go:83] releasing machines lock for "pause-422707", held for 6.605593982s
	I1002 07:47:11.297824  460071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-422707
	I1002 07:47:11.314915  460071 ssh_runner.go:195] Run: cat /version.json
	I1002 07:47:11.314934  460071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:47:11.314967  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:11.314997  460071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-422707
	I1002 07:47:11.336948  460071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/pause-422707/id_rsa Username:docker}
	I1002 07:47:11.339707  460071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/pause-422707/id_rsa Username:docker}
	I1002 07:47:11.515965  460071 ssh_runner.go:195] Run: systemctl --version
	I1002 07:47:11.522457  460071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 07:47:11.572359  460071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:47:11.576896  460071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:47:11.576976  460071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:47:11.584761  460071 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:47:11.584783  460071 start.go:495] detecting cgroup driver to use...
	I1002 07:47:11.584815  460071 detect.go:187] detected "cgroupfs" cgroup driver on host os
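The "cgroupfs" result detected here matches the CgroupDriver field in the docker info dump earlier in this log. For reference only (this run does not execute it), the same field can be queried directly:

        docker info --format '{{.CgroupDriver}}'   # prints "cgroupfs" on this host, per the info dump above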
	I1002 07:47:11.584861  460071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 07:47:11.600150  460071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 07:47:11.614242  460071 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:47:11.614334  460071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:47:11.630623  460071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:47:11.644410  460071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:47:11.786533  460071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:47:11.945574  460071 docker.go:234] disabling docker service ...
	I1002 07:47:11.945639  460071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:47:11.961726  460071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:47:11.976646  460071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:47:12.116768  460071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:47:12.256912  460071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:47:12.270192  460071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:47:12.285234  460071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 07:47:12.285309  460071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:12.295062  460071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 07:47:12.295190  460071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:12.305321  460071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:12.314210  460071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:12.323217  460071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:47:12.332614  460071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:12.341307  460071 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:12.350016  460071 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 07:47:12.359060  460071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:47:12.366908  460071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:47:12.374653  460071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:47:12.510486  460071 ssh_runner.go:195] Run: sudo systemctl restart crio
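Taken together, the sed edits above aim to leave a CRI-O drop-in roughly like the following before the restart. This is a sketch reconstructed only from the commands shown, not a dump of the node's actual /etc/crio/crio.conf.d/02-crio.conf; the [crio.image]/[crio.runtime] section placement is the stock CRI-O layout and is assumed here:

        [crio.image]
        pause_image = "registry.k8s.io/pause:3.10.1"

        [crio.runtime]
        cgroup_manager = "cgroupfs"
        conmon_cgroup = "pod"
        default_sysctls = [
          "net.ipv4.ip_unprivileged_port_start=0",
        ]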
	I1002 07:47:12.686286  460071 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 07:47:12.686379  460071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 07:47:12.690175  460071 start.go:563] Will wait 60s for crictl version
	I1002 07:47:12.690286  460071 ssh_runner.go:195] Run: which crictl
	I1002 07:47:12.693986  460071 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:47:12.719439  460071 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 07:47:12.719581  460071 ssh_runner.go:195] Run: crio --version
	I1002 07:47:12.748406  460071 ssh_runner.go:195] Run: crio --version
	I1002 07:47:12.781747  460071 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 07:47:12.784833  460071 cli_runner.go:164] Run: docker network inspect pause-422707 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:47:12.800384  460071 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 07:47:12.804384  460071 kubeadm.go:883] updating cluster {Name:pause-422707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-422707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:47:12.804535  460071 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:47:12.804594  460071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:47:12.838049  460071 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:47:12.838078  460071 crio.go:433] Images already preloaded, skipping extraction
	I1002 07:47:12.838135  460071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:47:12.868696  460071 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 07:47:12.868721  460071 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:47:12.868729  460071 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 07:47:12.868849  460071 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-422707 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-422707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
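The kubelet unit snippet above is written to the node a few lines further down as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hedged way to verify the rendered unit on such a node, shown for reference only (beyond the daemon-reload, these are not steps this run performs):

        systemctl cat kubelet        # shows kubelet.service plus the 10-kubeadm.conf drop-in
        sudo systemctl daemon-reload # pick up drop-in changes, as the log does below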
	I1002 07:47:12.868937  460071 ssh_runner.go:195] Run: crio config
	I1002 07:47:12.934263  460071 cni.go:84] Creating CNI manager for ""
	I1002 07:47:12.934297  460071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:47:12.934316  460071 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:47:12.934370  460071 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-422707 NodeName:pause-422707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:47:12.934546  460071 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-422707"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:47:12.934635  460071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:47:12.943448  460071 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:47:12.943543  460071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:47:12.951140  460071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1002 07:47:12.964030  460071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:47:12.978548  460071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
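One way to sanity-check a generated config like the kubeadm.yaml.new scp'd above is a kubeadm dry run. This is illustrative only, not a step this test run performs, and it assumes the kubeadm binary sits alongside the kubelet/kubectl binaries under /var/lib/minikube/binaries/v1.34.1:

        sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run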
	I1002 07:47:12.991826  460071 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:47:12.995804  460071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:47:13.129338  460071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:47:13.144304  460071 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707 for IP: 192.168.85.2
	I1002 07:47:13.144325  460071 certs.go:195] generating shared ca certs ...
	I1002 07:47:13.144341  460071 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:47:13.144474  460071 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 07:47:13.144521  460071 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 07:47:13.144532  460071 certs.go:257] generating profile certs ...
	I1002 07:47:13.144632  460071 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/client.key
	I1002 07:47:13.144700  460071 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/apiserver.key.b8eed788
	I1002 07:47:13.144752  460071 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/proxy-client.key
	I1002 07:47:13.144859  460071 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 07:47:13.144889  460071 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 07:47:13.144904  460071 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:47:13.144930  460071 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 07:47:13.144960  460071 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:47:13.144984  460071 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 07:47:13.145031  460071 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 07:47:13.145631  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:47:13.165041  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 07:47:13.183626  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:47:13.201522  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 07:47:13.220053  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 07:47:13.237598  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:47:13.254061  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:47:13.270958  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:47:13.288223  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 07:47:13.308498  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 07:47:13.325984  460071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:47:13.345088  460071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:47:13.358327  460071 ssh_runner.go:195] Run: openssl version
	I1002 07:47:13.364613  460071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 07:47:13.372763  460071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 07:47:13.376940  460071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 07:47:13.377013  460071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 07:47:13.420567  460071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 07:47:13.428980  460071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 07:47:13.437900  460071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 07:47:13.442024  460071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 07:47:13.442096  460071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 07:47:13.484152  460071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:47:13.492860  460071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:47:13.501456  460071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:47:13.505665  460071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:47:13.505741  460071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:47:13.547327  460071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:47:13.555747  460071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:47:13.560508  460071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:47:13.604165  460071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:47:13.649056  460071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:47:13.691734  460071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:47:13.735455  460071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:47:13.779364  460071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 07:47:13.823035  460071 kubeadm.go:400] StartCluster: {Name:pause-422707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-422707 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:47:13.823252  460071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 07:47:13.823357  460071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:47:13.866864  460071 cri.go:89] found id: "7779786dbfb40f2436252d55263d5b88b48a937678c675a5ec383b2da42c5be2"
	I1002 07:47:13.866938  460071 cri.go:89] found id: "fff7fe0cc7b8b2200c8f3298384331b60916e87b46e04f1d6751ac804e1bd38e"
	I1002 07:47:13.866959  460071 cri.go:89] found id: "4c3b3cd93e322872b86d37772d4707046419be26c02a2e63639ac63fef43bb5b"
	I1002 07:47:13.866979  460071 cri.go:89] found id: "905cd7e5dfd7ea9891c435d909e83a9b93ede8e42ba50c4ca101e96e91b91bcd"
	I1002 07:47:13.867017  460071 cri.go:89] found id: "e1049b358ad259731384916f35ccf90b48b850267f7aed64a45d9db512a3a6d2"
	I1002 07:47:13.867041  460071 cri.go:89] found id: "36a0edc3f91c599e64798a3222fc111e434ab4a719442e7564de7ee2187ca26a"
	I1002 07:47:13.867063  460071 cri.go:89] found id: "bf6dbc138db362cfff432db0b771a54903e496cfb0cf5bd18097881ec91376c4"
	I1002 07:47:13.867183  460071 cri.go:89] found id: ""
	I1002 07:47:13.867280  460071 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 07:47:13.880015  460071 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T07:47:13Z" level=error msg="open /run/runc: no such file or directory"
	I1002 07:47:13.880169  460071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:47:13.892241  460071 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:47:13.892309  460071 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:47:13.892401  460071 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:47:13.902318  460071 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:47:13.903225  460071 kubeconfig.go:125] found "pause-422707" server: "https://192.168.85.2:8443"
	I1002 07:47:13.904575  460071 kapi.go:59] client config for pause-422707: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:47:13.905983  460071 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 07:47:13.906042  460071 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 07:47:13.906066  460071 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 07:47:13.906250  460071 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 07:47:13.906297  460071 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 07:47:13.906994  460071 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:47:13.921592  460071 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 07:47:13.921627  460071 kubeadm.go:601] duration metric: took 29.297891ms to restartPrimaryControlPlane
	I1002 07:47:13.921636  460071 kubeadm.go:402] duration metric: took 98.612057ms to StartCluster
	I1002 07:47:13.921652  460071 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:47:13.921730  460071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:47:13.922575  460071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:47:13.922835  460071 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 07:47:13.923052  460071 config.go:182] Loaded profile config "pause-422707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:47:13.923106  460071 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:47:13.927663  460071 out.go:179] * Enabled addons: 
	I1002 07:47:13.927775  460071 out.go:179] * Verifying Kubernetes components...
	I1002 07:47:13.931438  460071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:47:13.931894  460071 addons.go:514] duration metric: took 8.777412ms for enable addons: enabled=[]
	I1002 07:47:14.112820  460071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:47:14.129394  460071 node_ready.go:35] waiting up to 6m0s for node "pause-422707" to be "Ready" ...
	I1002 07:47:10.509542  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:10.519913  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:10.519986  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:10.548547  447344 cri.go:89] found id: ""
	I1002 07:47:10.548576  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.548586  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:10.548593  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:10.548652  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:10.573108  447344 cri.go:89] found id: ""
	I1002 07:47:10.573136  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.573146  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:10.573152  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:10.573210  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:10.598730  447344 cri.go:89] found id: ""
	I1002 07:47:10.598753  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.598762  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:10.598768  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:10.598840  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:10.625353  447344 cri.go:89] found id: ""
	I1002 07:47:10.625380  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.625390  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:10.625396  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:10.625501  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:10.651756  447344 cri.go:89] found id: ""
	I1002 07:47:10.651781  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.651791  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:10.651798  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:10.651856  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:10.678025  447344 cri.go:89] found id: ""
	I1002 07:47:10.678056  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.678065  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:10.678072  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:10.678140  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:10.705869  447344 cri.go:89] found id: ""
	I1002 07:47:10.705895  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.705904  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:10.705910  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:10.705983  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:10.731672  447344 cri.go:89] found id: ""
	I1002 07:47:10.731694  447344 logs.go:282] 0 containers: []
	W1002 07:47:10.731703  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:10.731727  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:10.731745  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:10.800873  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:10.800899  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:10.800916  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:10.841843  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:10.841926  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:10.879486  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:10.879566  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:11.007003  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:11.007131  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:13.536958  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:13.548915  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:13.548987  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:13.593431  447344 cri.go:89] found id: ""
	I1002 07:47:13.593459  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.593468  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:13.593475  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:13.593535  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:13.630994  447344 cri.go:89] found id: ""
	I1002 07:47:13.631021  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.631030  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:13.631037  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:13.631140  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:13.669485  447344 cri.go:89] found id: ""
	I1002 07:47:13.669514  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.669523  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:13.669530  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:13.669589  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:13.697680  447344 cri.go:89] found id: ""
	I1002 07:47:13.697721  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.697730  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:13.697737  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:13.697804  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:13.735944  447344 cri.go:89] found id: ""
	I1002 07:47:13.735970  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.735980  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:13.735987  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:13.736043  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:13.769782  447344 cri.go:89] found id: ""
	I1002 07:47:13.769811  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.769820  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:13.769827  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:13.769888  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:13.809661  447344 cri.go:89] found id: ""
	I1002 07:47:13.809690  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.809699  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:13.809706  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:13.809766  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:13.840213  447344 cri.go:89] found id: ""
	I1002 07:47:13.840242  447344 logs.go:282] 0 containers: []
	W1002 07:47:13.840250  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:13.840259  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:13.840274  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:13.876171  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:13.876203  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:14.019615  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:14.019722  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:14.041302  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:14.041340  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:14.122048  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:14.122070  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:14.122083  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:16.661906  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:16.680850  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:16.680920  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:16.733530  447344 cri.go:89] found id: ""
	I1002 07:47:16.733553  447344 logs.go:282] 0 containers: []
	W1002 07:47:16.733561  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:16.733568  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:16.733625  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:16.780177  447344 cri.go:89] found id: ""
	I1002 07:47:16.780201  447344 logs.go:282] 0 containers: []
	W1002 07:47:16.780211  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:16.780217  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:16.780275  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:16.821276  447344 cri.go:89] found id: ""
	I1002 07:47:16.821303  447344 logs.go:282] 0 containers: []
	W1002 07:47:16.821313  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:16.821319  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:16.821378  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:16.882463  447344 cri.go:89] found id: ""
	I1002 07:47:16.882488  447344 logs.go:282] 0 containers: []
	W1002 07:47:16.882497  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:16.882507  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:16.882564  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:16.928483  447344 cri.go:89] found id: ""
	I1002 07:47:16.928510  447344 logs.go:282] 0 containers: []
	W1002 07:47:16.928520  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:16.928526  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:16.928584  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:16.978194  447344 cri.go:89] found id: ""
	I1002 07:47:16.978221  447344 logs.go:282] 0 containers: []
	W1002 07:47:16.978231  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:16.978237  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:16.978303  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:17.020494  447344 cri.go:89] found id: ""
	I1002 07:47:17.020521  447344 logs.go:282] 0 containers: []
	W1002 07:47:17.020531  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:17.020538  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:17.020595  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:17.074138  447344 cri.go:89] found id: ""
	I1002 07:47:17.074163  447344 logs.go:282] 0 containers: []
	W1002 07:47:17.074172  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:17.074181  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:17.074192  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:17.118906  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:17.118944  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:17.159763  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:17.159787  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:17.304020  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:17.304100  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:17.340815  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:17.340895  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:17.436310  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:19.936552  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:19.953508  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:19.953580  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:20.009173  447344 cri.go:89] found id: ""
	I1002 07:47:20.009198  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.009207  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:20.009216  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:20.009284  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:20.061337  447344 cri.go:89] found id: ""
	I1002 07:47:20.061435  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.061464  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:20.061472  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:20.061545  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:20.128576  447344 cri.go:89] found id: ""
	I1002 07:47:20.128653  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.128678  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:20.128704  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:20.128804  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:20.181519  447344 cri.go:89] found id: ""
	I1002 07:47:20.181541  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.181549  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:20.181556  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:20.181621  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:20.232491  447344 cri.go:89] found id: ""
	I1002 07:47:20.232517  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.232526  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:20.232532  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:20.232596  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:20.281996  447344 cri.go:89] found id: ""
	I1002 07:47:20.282022  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.282032  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:20.282039  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:20.282101  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:20.331530  447344 cri.go:89] found id: ""
	I1002 07:47:20.331555  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.331564  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:20.331570  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:20.331629  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:20.868981  460071 node_ready.go:49] node "pause-422707" is "Ready"
	I1002 07:47:20.869009  460071 node_ready.go:38] duration metric: took 6.739578461s for node "pause-422707" to be "Ready" ...
	I1002 07:47:20.869023  460071 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:47:20.869086  460071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:20.888531  460071 api_server.go:72] duration metric: took 6.965661497s to wait for apiserver process to appear ...
	I1002 07:47:20.888552  460071 api_server.go:88] waiting for apiserver healthz status ...
	I1002 07:47:20.888571  460071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 07:47:20.923027  460071 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 07:47:20.923167  460071 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 07:47:21.388682  460071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 07:47:21.404430  460071 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 07:47:21.404509  460071 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 07:47:21.888688  460071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 07:47:21.921815  460071 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 07:47:21.921939  460071 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 07:47:22.389386  460071 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 07:47:22.397644  460071 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 07:47:22.398664  460071 api_server.go:141] control plane version: v1.34.1
	I1002 07:47:22.398690  460071 api_server.go:131] duration metric: took 1.510130462s to wait for apiserver health ...
	I1002 07:47:22.398699  460071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 07:47:22.402179  460071 system_pods.go:59] 7 kube-system pods found
	I1002 07:47:22.402223  460071 system_pods.go:61] "coredns-66bc5c9577-5fglk" [db096af0-568e-459a-b2a9-3139e8957c8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 07:47:22.402261  460071 system_pods.go:61] "etcd-pause-422707" [267a51be-e04f-4dfa-9823-8af325902dea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 07:47:22.402274  460071 system_pods.go:61] "kindnet-gkbbj" [409e91ec-a4dc-47dd-9b39-6ddf23e0dad3] Running
	I1002 07:47:22.402282  460071 system_pods.go:61] "kube-apiserver-pause-422707" [05e165a9-496a-41b2-8873-5cb063b782df] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 07:47:22.402290  460071 system_pods.go:61] "kube-controller-manager-pause-422707" [5c0db4fc-3582-4549-9edc-87ec0afa87e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 07:47:22.402300  460071 system_pods.go:61] "kube-proxy-mjj7w" [e1cddb37-a181-4f8e-b71c-e8240c6269c6] Running
	I1002 07:47:22.402323  460071 system_pods.go:61] "kube-scheduler-pause-422707" [6f937270-8ecd-49e0-b66d-85abf2c29010] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 07:47:22.402333  460071 system_pods.go:74] duration metric: took 3.627141ms to wait for pod list to return data ...
	I1002 07:47:22.402345  460071 default_sa.go:34] waiting for default service account to be created ...
	I1002 07:47:22.405306  460071 default_sa.go:45] found service account: "default"
	I1002 07:47:22.405334  460071 default_sa.go:55] duration metric: took 2.982984ms for default service account to be created ...
	I1002 07:47:22.405344  460071 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 07:47:22.408691  460071 system_pods.go:86] 7 kube-system pods found
	I1002 07:47:22.408729  460071 system_pods.go:89] "coredns-66bc5c9577-5fglk" [db096af0-568e-459a-b2a9-3139e8957c8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 07:47:22.408738  460071 system_pods.go:89] "etcd-pause-422707" [267a51be-e04f-4dfa-9823-8af325902dea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 07:47:22.408775  460071 system_pods.go:89] "kindnet-gkbbj" [409e91ec-a4dc-47dd-9b39-6ddf23e0dad3] Running
	I1002 07:47:22.408785  460071 system_pods.go:89] "kube-apiserver-pause-422707" [05e165a9-496a-41b2-8873-5cb063b782df] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 07:47:22.408797  460071 system_pods.go:89] "kube-controller-manager-pause-422707" [5c0db4fc-3582-4549-9edc-87ec0afa87e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 07:47:22.408801  460071 system_pods.go:89] "kube-proxy-mjj7w" [e1cddb37-a181-4f8e-b71c-e8240c6269c6] Running
	I1002 07:47:22.408813  460071 system_pods.go:89] "kube-scheduler-pause-422707" [6f937270-8ecd-49e0-b66d-85abf2c29010] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 07:47:22.408820  460071 system_pods.go:126] duration metric: took 3.470356ms to wait for k8s-apps to be running ...
	I1002 07:47:22.408857  460071 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 07:47:22.408920  460071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:47:22.422284  460071 system_svc.go:56] duration metric: took 13.429481ms WaitForService to wait for kubelet
	I1002 07:47:22.422314  460071 kubeadm.go:586] duration metric: took 8.499450115s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:47:22.422335  460071 node_conditions.go:102] verifying NodePressure condition ...
	I1002 07:47:22.425612  460071 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 07:47:22.425646  460071 node_conditions.go:123] node cpu capacity is 2
	I1002 07:47:22.425660  460071 node_conditions.go:105] duration metric: took 3.319331ms to run NodePressure ...
	I1002 07:47:22.425674  460071 start.go:241] waiting for startup goroutines ...
	I1002 07:47:22.425681  460071 start.go:246] waiting for cluster config update ...
	I1002 07:47:22.425690  460071 start.go:255] writing updated cluster config ...
	I1002 07:47:22.426034  460071 ssh_runner.go:195] Run: rm -f paused
	I1002 07:47:22.429749  460071 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:47:22.430372  460071 kapi.go:59] client config for pause-422707: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/profiles/pause-422707/client.key", CAFile:"/home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 07:47:22.434545  460071 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5fglk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:20.377299  447344 cri.go:89] found id: ""
	I1002 07:47:20.377369  447344 logs.go:282] 0 containers: []
	W1002 07:47:20.377392  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:20.377419  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:20.377464  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:20.549009  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:20.549047  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:20.566218  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:20.566246  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:20.683290  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:20.683363  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:20.683381  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:20.734581  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:20.734677  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:23.282536  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:23.292424  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:23.292494  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:23.324997  447344 cri.go:89] found id: ""
	I1002 07:47:23.325022  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.325037  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:23.325047  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:23.325110  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:23.350463  447344 cri.go:89] found id: ""
	I1002 07:47:23.350490  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.350500  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:23.350506  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:23.350564  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:23.382900  447344 cri.go:89] found id: ""
	I1002 07:47:23.382931  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.382946  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:23.382952  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:23.383028  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:23.413972  447344 cri.go:89] found id: ""
	I1002 07:47:23.413998  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.414007  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:23.414014  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:23.414122  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:23.446999  447344 cri.go:89] found id: ""
	I1002 07:47:23.447026  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.447036  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:23.447042  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:23.447152  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:23.477351  447344 cri.go:89] found id: ""
	I1002 07:47:23.477418  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.477441  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:23.477464  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:23.477555  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:23.503744  447344 cri.go:89] found id: ""
	I1002 07:47:23.503772  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.503781  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:23.503788  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:23.503847  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:23.536169  447344 cri.go:89] found id: ""
	I1002 07:47:23.536194  447344 logs.go:282] 0 containers: []
	W1002 07:47:23.536203  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:23.536213  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:23.536225  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:23.650595  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:23.650632  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:23.671009  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:23.671039  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:23.752854  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:23.752872  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:23.752889  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:23.792445  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:23.792484  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 07:47:24.439828  460071 pod_ready.go:104] pod "coredns-66bc5c9577-5fglk" is not "Ready", error: <nil>
	W1002 07:47:26.441305  460071 pod_ready.go:104] pod "coredns-66bc5c9577-5fglk" is not "Ready", error: <nil>
	I1002 07:47:27.940763  460071 pod_ready.go:94] pod "coredns-66bc5c9577-5fglk" is "Ready"
	I1002 07:47:27.940791  460071 pod_ready.go:86] duration metric: took 5.506218476s for pod "coredns-66bc5c9577-5fglk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:27.943649  460071 pod_ready.go:83] waiting for pod "etcd-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:27.948413  460071 pod_ready.go:94] pod "etcd-pause-422707" is "Ready"
	I1002 07:47:27.948438  460071 pod_ready.go:86] duration metric: took 4.766794ms for pod "etcd-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:27.950903  460071 pod_ready.go:83] waiting for pod "kube-apiserver-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:26.330284  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:26.340368  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:26.340442  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:26.369685  447344 cri.go:89] found id: ""
	I1002 07:47:26.369711  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.369720  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:26.369728  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:26.369788  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:26.396524  447344 cri.go:89] found id: ""
	I1002 07:47:26.396552  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.396562  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:26.396569  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:26.396655  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:26.426888  447344 cri.go:89] found id: ""
	I1002 07:47:26.426915  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.426930  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:26.426938  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:26.427025  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:26.459403  447344 cri.go:89] found id: ""
	I1002 07:47:26.459427  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.459436  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:26.459442  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:26.459523  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:26.485083  447344 cri.go:89] found id: ""
	I1002 07:47:26.485107  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.485116  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:26.485123  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:26.485189  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:26.510887  447344 cri.go:89] found id: ""
	I1002 07:47:26.510914  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.510924  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:26.510931  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:26.511000  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:26.539513  447344 cri.go:89] found id: ""
	I1002 07:47:26.539538  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.539547  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:26.539553  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:26.539614  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:26.569497  447344 cri.go:89] found id: ""
	I1002 07:47:26.569523  447344 logs.go:282] 0 containers: []
	W1002 07:47:26.569533  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:26.569543  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:26.569554  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:26.606001  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:26.606036  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:26.639212  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:26.639241  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:26.759608  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:26.759648  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:26.776023  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:26.776061  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:26.843605  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:29.343844  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:29.353821  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:29.353896  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:29.384551  447344 cri.go:89] found id: ""
	I1002 07:47:29.384577  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.384587  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:29.384597  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:29.384654  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:29.410405  447344 cri.go:89] found id: ""
	I1002 07:47:29.410433  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.410442  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:29.410454  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:29.410530  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:29.436136  447344 cri.go:89] found id: ""
	I1002 07:47:29.436163  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.436172  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:29.436179  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:29.436245  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:29.468270  447344 cri.go:89] found id: ""
	I1002 07:47:29.468295  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.468311  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:29.468322  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:29.468384  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:29.499036  447344 cri.go:89] found id: ""
	I1002 07:47:29.499061  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.499070  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:29.499077  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:29.499164  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:29.526252  447344 cri.go:89] found id: ""
	I1002 07:47:29.526279  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.526295  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:29.526304  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:29.526364  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:29.552735  447344 cri.go:89] found id: ""
	I1002 07:47:29.552764  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.552773  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:29.552780  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:29.552856  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:29.579849  447344 cri.go:89] found id: ""
	I1002 07:47:29.579875  447344 logs.go:282] 0 containers: []
	W1002 07:47:29.579884  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:29.579894  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:29.579905  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:29.695358  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:29.695402  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:29.711633  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:29.711663  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:29.781134  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:29.781156  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:29.781179  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:29.817012  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:29.817049  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 07:47:29.456859  460071 pod_ready.go:94] pod "kube-apiserver-pause-422707" is "Ready"
	I1002 07:47:29.456883  460071 pod_ready.go:86] duration metric: took 1.505953302s for pod "kube-apiserver-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:29.460575  460071 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 07:47:31.466943  460071 pod_ready.go:104] pod "kube-controller-manager-pause-422707" is not "Ready", error: <nil>
	I1002 07:47:31.966632  460071 pod_ready.go:94] pod "kube-controller-manager-pause-422707" is "Ready"
	I1002 07:47:31.966659  460071 pod_ready.go:86] duration metric: took 2.506061859s for pod "kube-controller-manager-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:31.969027  460071 pod_ready.go:83] waiting for pod "kube-proxy-mjj7w" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:31.973792  460071 pod_ready.go:94] pod "kube-proxy-mjj7w" is "Ready"
	I1002 07:47:31.973819  460071 pod_ready.go:86] duration metric: took 4.766711ms for pod "kube-proxy-mjj7w" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:32.139215  460071 pod_ready.go:83] waiting for pod "kube-scheduler-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 07:47:34.145068  460071 pod_ready.go:104] pod "kube-scheduler-pause-422707" is not "Ready", error: <nil>
	I1002 07:47:34.646042  460071 pod_ready.go:94] pod "kube-scheduler-pause-422707" is "Ready"
	I1002 07:47:34.646071  460071 pod_ready.go:86] duration metric: took 2.506827373s for pod "kube-scheduler-pause-422707" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:47:34.646085  460071 pod_ready.go:40] duration metric: took 12.216302403s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:47:34.707742  460071 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 07:47:34.710691  460071 out.go:179] * Done! kubectl is now configured to use "pause-422707" cluster and "default" namespace by default
	I1002 07:47:32.348641  447344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:47:32.359837  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 07:47:32.359913  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 07:47:32.389393  447344 cri.go:89] found id: ""
	I1002 07:47:32.389417  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.389426  447344 logs.go:284] No container was found matching "kube-apiserver"
	I1002 07:47:32.389433  447344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 07:47:32.389494  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 07:47:32.416857  447344 cri.go:89] found id: ""
	I1002 07:47:32.416881  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.416890  447344 logs.go:284] No container was found matching "etcd"
	I1002 07:47:32.416896  447344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 07:47:32.416960  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 07:47:32.448018  447344 cri.go:89] found id: ""
	I1002 07:47:32.448039  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.448048  447344 logs.go:284] No container was found matching "coredns"
	I1002 07:47:32.448057  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 07:47:32.448116  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 07:47:32.479836  447344 cri.go:89] found id: ""
	I1002 07:47:32.479861  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.479869  447344 logs.go:284] No container was found matching "kube-scheduler"
	I1002 07:47:32.479876  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 07:47:32.479945  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 07:47:32.506181  447344 cri.go:89] found id: ""
	I1002 07:47:32.506207  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.506217  447344 logs.go:284] No container was found matching "kube-proxy"
	I1002 07:47:32.506224  447344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 07:47:32.506317  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 07:47:32.539470  447344 cri.go:89] found id: ""
	I1002 07:47:32.539533  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.539551  447344 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 07:47:32.539558  447344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 07:47:32.539615  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 07:47:32.570033  447344 cri.go:89] found id: ""
	I1002 07:47:32.570064  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.570073  447344 logs.go:284] No container was found matching "kindnet"
	I1002 07:47:32.570080  447344 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 07:47:32.570139  447344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 07:47:32.595512  447344 cri.go:89] found id: ""
	I1002 07:47:32.595538  447344 logs.go:282] 0 containers: []
	W1002 07:47:32.595547  447344 logs.go:284] No container was found matching "storage-provisioner"
	I1002 07:47:32.595556  447344 logs.go:123] Gathering logs for kubelet ...
	I1002 07:47:32.595567  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 07:47:32.711274  447344 logs.go:123] Gathering logs for dmesg ...
	I1002 07:47:32.711315  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 07:47:32.727547  447344 logs.go:123] Gathering logs for describe nodes ...
	I1002 07:47:32.727577  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 07:47:32.798460  447344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 07:47:32.798492  447344 logs.go:123] Gathering logs for CRI-O ...
	I1002 07:47:32.798516  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 07:47:32.834943  447344 logs.go:123] Gathering logs for container status ...
	I1002 07:47:32.834981  447344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.559833598Z" level=info msg="Started container" PID=2348 containerID=b8364eff63eb27502280c15e72f050b391d6c48bdc1e0b15e12b991cbe65b4e2 description=kube-system/etcd-pause-422707/etcd id=b57f6ab4-acd3-4628-a9ce-a18b73e82f93 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dcdfa6de9aae1cd0db1cceeb7e3403b10ac8e58a02eb8b8b16add2c6e91df41
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.560247304Z" level=info msg="Starting container: de61fc1c61af20cceeee6e8c3ff2c66f1d72b4eff29e7df072f688c447638dc5" id=a9a3a530-41d4-4ec2-a1d4-207c10cf65fc name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.561576448Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.562148818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.569855239Z" level=info msg="Started container" PID=2356 containerID=de61fc1c61af20cceeee6e8c3ff2c66f1d72b4eff29e7df072f688c447638dc5 description=kube-system/kube-proxy-mjj7w/kube-proxy id=a9a3a530-41d4-4ec2-a1d4-207c10cf65fc name=/runtime.v1.RuntimeService/StartContainer sandboxID=08ba7f83c7655621b6ed136ecc0b549f2fe59bd083c08c5d7f090180c947bf5f
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.570697978Z" level=info msg="Started container" PID=2358 containerID=d120fcee17433144b61042570d7426dbbea18ad38caae066f3c488e1d546fa5f description=kube-system/kindnet-gkbbj/kindnet-cni id=b7f81b13-6343-461c-9ce0-f4f988768ecb name=/runtime.v1.RuntimeService/StartContainer sandboxID=7e49edcdb9062f1f98723c2a53af28d47363195bad4453dda0e2f1cce9614cfb
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.574399351Z" level=info msg="Started container" PID=2341 containerID=7417b7c7f3bfda98962f017b5a0510c9c2693d339c94453d0849e7de2eb9d8d4 description=kube-system/kube-apiserver-pause-422707/kube-apiserver id=f4b27314-91d3-4688-a866-1fb2fa099bcd name=/runtime.v1.RuntimeService/StartContainer sandboxID=897f0a0b9a69773bb7025d60de1ff8f36b9965c7b680797947fd1a2821c58483
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.605219257Z" level=info msg="Created container bd2ad8230b36a900ce2e1a29b1b8034616f748c947febfbfde97a91c24efb068: kube-system/coredns-66bc5c9577-5fglk/coredns" id=67ed32bf-3840-405e-9fce-c6608de167e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.605894611Z" level=info msg="Starting container: bd2ad8230b36a900ce2e1a29b1b8034616f748c947febfbfde97a91c24efb068" id=0fc12c14-75ea-44cd-bac4-1ff84e6eefd7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:47:15 pause-422707 crio[2053]: time="2025-10-02T07:47:15.608534313Z" level=info msg="Started container" PID=2389 containerID=bd2ad8230b36a900ce2e1a29b1b8034616f748c947febfbfde97a91c24efb068 description=kube-system/coredns-66bc5c9577-5fglk/coredns id=0fc12c14-75ea-44cd-bac4-1ff84e6eefd7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e97169008a73694d86720e20c126bed7990928193e04b6dc665b1021619b5c4
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.920475018Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.924075894Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.924112177Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.924136112Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.927024826Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.92706147Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.927109028Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.930376535Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.930413031Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.930437155Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.933654897Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.933692263Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.933716977Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.941169972Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 07:47:25 pause-422707 crio[2053]: time="2025-10-02T07:47:25.941205763Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	bd2ad8230b36a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   4e97169008a73       coredns-66bc5c9577-5fglk               kube-system
	d120fcee17433       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   25 seconds ago       Running             kindnet-cni               1                   7e49edcdb9062       kindnet-gkbbj                          kube-system
	de61fc1c61af2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   25 seconds ago       Running             kube-proxy                1                   08ba7f83c7655       kube-proxy-mjj7w                       kube-system
	b8364eff63eb2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   25 seconds ago       Running             etcd                      1                   3dcdfa6de9aae       etcd-pause-422707                      kube-system
	7417b7c7f3bfd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   25 seconds ago       Running             kube-apiserver            1                   897f0a0b9a697       kube-apiserver-pause-422707            kube-system
	f7bae3cd05925       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   25 seconds ago       Running             kube-scheduler            1                   d837912379b42       kube-scheduler-pause-422707            kube-system
	cdd11ede7258f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   25 seconds ago       Running             kube-controller-manager   1                   c99c141b49bed       kube-controller-manager-pause-422707   kube-system
	7779786dbfb40       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   39 seconds ago       Exited              coredns                   0                   4e97169008a73       coredns-66bc5c9577-5fglk               kube-system
	fff7fe0cc7b8b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   08ba7f83c7655       kube-proxy-mjj7w                       kube-system
	4c3b3cd93e322       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   7e49edcdb9062       kindnet-gkbbj                          kube-system
	905cd7e5dfd7e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   d837912379b42       kube-scheduler-pause-422707            kube-system
	e1049b358ad25       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   c99c141b49bed       kube-controller-manager-pause-422707   kube-system
	36a0edc3f91c5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   897f0a0b9a697       kube-apiserver-pause-422707            kube-system
	bf6dbc138db36       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   3dcdfa6de9aae       etcd-pause-422707                      kube-system
	
	
	==> coredns [7779786dbfb40f2436252d55263d5b88b48a937678c675a5ec383b2da42c5be2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49223 - 26368 "HINFO IN 4398010615553394589.7569334930726868245. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022762105s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bd2ad8230b36a900ce2e1a29b1b8034616f748c947febfbfde97a91c24efb068] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42260 - 11084 "HINFO IN 4997789750218908676.854079443307176604. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.017617775s
	
	
	==> describe nodes <==
	Name:               pause-422707
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-422707
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=pause-422707
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_46_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:46:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-422707
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:47:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:47:00 +0000   Thu, 02 Oct 2025 07:46:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:47:00 +0000   Thu, 02 Oct 2025 07:46:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:47:00 +0000   Thu, 02 Oct 2025 07:46:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:47:00 +0000   Thu, 02 Oct 2025 07:47:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-422707
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0dd4ccaadc142828afec45a8ed1f363
	  System UUID:                58298745-8344-40a9-8a8a-d872fb025589
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5fglk                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     80s
	  kube-system                 etcd-pause-422707                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         85s
	  kube-system                 kindnet-gkbbj                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      80s
	  kube-system                 kube-apiserver-pause-422707             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-pause-422707    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-mjj7w                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-422707             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 79s                kube-proxy       
	  Normal   Starting                 19s                kube-proxy       
	  Normal   Starting                 94s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 94s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     93s (x8 over 94s)  kubelet          Node pause-422707 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    93s (x8 over 94s)  kubelet          Node pause-422707 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  93s (x8 over 94s)  kubelet          Node pause-422707 status is now: NodeHasSufficientMemory
	  Normal   Starting                 86s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 86s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  85s                kubelet          Node pause-422707 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s                kubelet          Node pause-422707 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s                kubelet          Node pause-422707 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           81s                node-controller  Node pause-422707 event: Registered Node pause-422707 in Controller
	  Normal   NodeReady                40s                kubelet          Node pause-422707 status is now: NodeReady
	  Normal   RegisteredNode           16s                node-controller  Node pause-422707 event: Registered Node pause-422707 in Controller
	
	
	==> dmesg <==
	[Oct 2 07:06] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:07] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:08] overlayfs: idmapped layers are currently not supported
	[  +3.056037] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:16] overlayfs: idmapped layers are currently not supported
	[  +2.690454] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:30] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:31] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:33] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b8364eff63eb27502280c15e72f050b391d6c48bdc1e0b15e12b991cbe65b4e2] <==
	{"level":"warn","ts":"2025-10-02T07:47:19.008073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.029300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.052760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.072181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.090789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.108156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.129483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.144159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.172283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.191769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.227900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.258038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.273917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.291958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.313784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.342227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.352255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.369928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.387373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.405783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.431717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.494288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.495997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.547517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:47:19.641100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54236","server-name":"","error":"EOF"}
	
	
	==> etcd [bf6dbc138db362cfff432db0b771a54903e496cfb0cf5bd18097881ec91376c4] <==
	{"level":"warn","ts":"2025-10-02T07:46:11.256198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:46:11.267863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:46:11.287953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:46:11.326174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:46:11.346197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:46:11.358813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:46:11.448056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59680","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T07:47:05.778152Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T07:47:05.778224Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-422707","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-02T07:47:05.778328Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T07:47:05.778400Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T07:47:06.069786Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:47:06.069851Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-02T07:47:06.069923Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T07:47:06.069886Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:47:06.069946Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:47:06.069955Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:47:06.069937Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T07:47:06.069991Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:47:06.070002Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:47:06.070009Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:47:06.073443Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-02T07:47:06.073535Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:47:06.073569Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-02T07:47:06.073576Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-422707","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 07:47:40 up  2:30,  0 user,  load average: 2.23, 2.80, 2.31
	Linux pause-422707 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c3b3cd93e322872b86d37772d4707046419be26c02a2e63639ac63fef43bb5b] <==
	I1002 07:46:20.709352       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 07:46:20.710662       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 07:46:20.710884       1 main.go:148] setting mtu 1500 for CNI 
	I1002 07:46:20.710928       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 07:46:20.711131       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T07:46:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 07:46:20.912098       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 07:46:20.912125       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 07:46:20.912134       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 07:46:20.912693       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 07:46:50.912333       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 07:46:50.912455       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 07:46:50.913573       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 07:46:51.000207       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1002 07:46:52.212801       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 07:46:52.212846       1 metrics.go:72] Registering metrics
	I1002 07:46:52.212907       1 controller.go:711] "Syncing nftables rules"
	I1002 07:47:00.912110       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 07:47:00.912164       1 main.go:301] handling current node
	
	
	==> kindnet [d120fcee17433144b61042570d7426dbbea18ad38caae066f3c488e1d546fa5f] <==
	I1002 07:47:15.709451       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 07:47:15.709668       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 07:47:15.709794       1 main.go:148] setting mtu 1500 for CNI 
	I1002 07:47:15.709856       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 07:47:15.709896       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T07:47:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 07:47:15.918427       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 07:47:15.927166       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 07:47:15.927294       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 07:47:15.927472       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 07:47:21.129448       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 07:47:21.129871       1 metrics.go:72] Registering metrics
	I1002 07:47:21.129975       1 controller.go:711] "Syncing nftables rules"
	I1002 07:47:25.920048       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 07:47:25.920133       1 main.go:301] handling current node
	I1002 07:47:35.918455       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 07:47:35.918504       1 main.go:301] handling current node
	
	
	==> kube-apiserver [36a0edc3f91c599e64798a3222fc111e434ab4a719442e7564de7ee2187ca26a] <==
	W1002 07:47:05.797301       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797348       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797390       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797458       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797518       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797564       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797621       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797682       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797770       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797815       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797854       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797897       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797941       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.797987       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.798030       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.798074       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.798118       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.798866       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.799602       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.799712       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.799799       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.799883       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.799966       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.800322       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 07:47:05.803578       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [7417b7c7f3bfda98962f017b5a0510c9c2693d339c94453d0849e7de2eb9d8d4] <==
	I1002 07:47:21.011941       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 07:47:21.041824       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:47:21.065836       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 07:47:21.066164       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 07:47:21.073021       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 07:47:21.073112       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 07:47:21.073288       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 07:47:21.073351       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 07:47:21.073444       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 07:47:21.073512       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1002 07:47:21.073580       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 07:47:21.073646       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 07:47:21.092104       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:47:21.092268       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 07:47:21.093156       1 aggregator.go:171] initial CRD sync complete...
	I1002 07:47:21.093719       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 07:47:21.093777       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 07:47:21.093807       1 cache.go:39] Caches are synced for autoregister controller
	E1002 07:47:21.114671       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 07:47:21.692788       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:47:22.903367       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 07:47:24.336249       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 07:47:24.582057       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:47:24.635308       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:47:24.684518       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [cdd11ede7258ff6809046b22ade252d706e70a12ce550aebbe4814c12e32f694] <==
	I1002 07:47:24.298785       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 07:47:24.320033       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 07:47:24.320113       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 07:47:24.320149       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 07:47:24.320163       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 07:47:24.320170       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 07:47:24.322794       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 07:47:24.322966       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 07:47:24.323142       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-422707"
	I1002 07:47:24.323228       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 07:47:24.324912       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 07:47:24.325454       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 07:47:24.325565       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 07:47:24.327144       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 07:47:24.327239       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 07:47:24.327289       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 07:47:24.327678       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 07:47:24.328928       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 07:47:24.329011       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 07:47:24.330509       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:47:24.331639       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 07:47:24.335696       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 07:47:24.337068       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:47:24.352511       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 07:47:24.355647       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	
	
	==> kube-controller-manager [e1049b358ad259731384916f35ccf90b48b850267f7aed64a45d9db512a3a6d2] <==
	I1002 07:46:19.267852       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:46:19.268079       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 07:46:19.268105       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 07:46:19.268162       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 07:46:19.272725       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 07:46:19.284937       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 07:46:19.290928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:46:19.295386       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 07:46:19.300739       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 07:46:19.300922       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 07:46:19.300986       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 07:46:19.301036       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 07:46:19.301078       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 07:46:19.308837       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 07:46:19.308964       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 07:46:19.309844       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 07:46:19.309924       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 07:46:19.315874       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 07:46:19.316023       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 07:46:19.316586       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-422707"
	I1002 07:46:19.316700       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 07:46:19.326623       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 07:46:19.338446       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-422707" podCIDRs=["10.244.0.0/24"]
	I1002 07:46:19.368130       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:47:04.323968       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [de61fc1c61af20cceeee6e8c3ff2c66f1d72b4eff29e7df072f688c447638dc5] <==
	I1002 07:47:18.764535       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:47:19.433666       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:47:21.035198       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:47:21.035245       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 07:47:21.035312       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:47:21.339724       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:47:21.346330       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:47:21.352626       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:47:21.352982       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:47:21.353171       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:47:21.354452       1 config.go:200] "Starting service config controller"
	I1002 07:47:21.354514       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:47:21.354555       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:47:21.354593       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:47:21.354632       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:47:21.354658       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:47:21.375171       1 config.go:309] "Starting node config controller"
	I1002 07:47:21.427763       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:47:21.427842       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:47:21.454845       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 07:47:21.454931       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:47:21.454959       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [fff7fe0cc7b8b2200c8f3298384331b60916e87b46e04f1d6751ac804e1bd38e] <==
	I1002 07:46:20.699831       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:46:20.792936       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:46:20.893600       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:46:20.893638       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 07:46:20.893723       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:46:20.915793       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:46:20.915914       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:46:20.920762       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:46:20.921170       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:46:20.921245       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:46:20.922500       1 config.go:200] "Starting service config controller"
	I1002 07:46:20.922572       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:46:20.922621       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:46:20.922655       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:46:20.922704       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:46:20.922729       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:46:20.923662       1 config.go:309] "Starting node config controller"
	I1002 07:46:20.923730       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:46:20.923761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:46:21.023663       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 07:46:21.023772       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:46:21.023801       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [905cd7e5dfd7ea9891c435d909e83a9b93ede8e42ba50c4ca101e96e91b91bcd] <==
	E1002 07:46:12.303191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:46:12.303231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:46:12.303307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:46:12.303363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:46:12.303478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:46:12.303552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:46:12.303611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 07:46:13.182809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:46:13.194456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:46:13.216402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:46:13.234767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:46:13.257297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:46:13.257579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:46:13.323423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:46:13.383333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:46:13.387735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:46:13.435376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:46:13.590781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1002 07:46:15.177974       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:47:05.785457       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:47:05.786046       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 07:47:05.786110       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 07:47:05.786161       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 07:47:05.786237       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 07:47:05.786286       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f7bae3cd05925ab12ba039c66e40c1c68b06fd8f8c2effc0320d367c8336d488] <==
	I1002 07:47:19.283430       1 serving.go:386] Generated self-signed cert in-memory
	I1002 07:47:22.182276       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 07:47:22.182320       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:47:22.188237       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 07:47:22.188336       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 07:47:22.188409       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:47:22.188447       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:47:22.188485       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:47:22.188518       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:47:22.188647       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 07:47:22.188717       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 07:47:22.288786       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 07:47:22.288949       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:47:22.288946       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.399561    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="557ebaeaf415604bade03417d103c013" pod="kube-system/kube-controller-manager-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: I1002 07:47:15.420635    1306 scope.go:117] "RemoveContainer" containerID="fff7fe0cc7b8b2200c8f3298384331b60916e87b46e04f1d6751ac804e1bd38e"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.421191    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="12b693a2a055e251c1b61556927a30a4" pod="kube-system/kube-scheduler-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.421394    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="44636457c2acad4cb2d7258f7377957e" pod="kube-system/kube-apiserver-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.421571    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1e8a1a47dce612b67e76b131801e7387" pod="kube-system/etcd-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.421741    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="557ebaeaf415604bade03417d103c013" pod="kube-system/kube-controller-manager-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.422106    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjj7w\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e1cddb37-a181-4f8e-b71c-e8240c6269c6" pod="kube-system/kube-proxy-mjj7w"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: I1002 07:47:15.447275    1306 scope.go:117] "RemoveContainer" containerID="4c3b3cd93e322872b86d37772d4707046419be26c02a2e63639ac63fef43bb5b"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.448495    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="44636457c2acad4cb2d7258f7377957e" pod="kube-system/kube-apiserver-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.448860    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1e8a1a47dce612b67e76b131801e7387" pod="kube-system/etcd-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.449464    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="557ebaeaf415604bade03417d103c013" pod="kube-system/kube-controller-manager-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.450093    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gkbbj\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="409e91ec-a4dc-47dd-9b39-6ddf23e0dad3" pod="kube-system/kindnet-gkbbj"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.450846    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjj7w\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e1cddb37-a181-4f8e-b71c-e8240c6269c6" pod="kube-system/kube-proxy-mjj7w"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.451383    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="12b693a2a055e251c1b61556927a30a4" pod="kube-system/kube-scheduler-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.479693    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gkbbj\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="409e91ec-a4dc-47dd-9b39-6ddf23e0dad3" pod="kube-system/kindnet-gkbbj"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.479903    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjj7w\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e1cddb37-a181-4f8e-b71c-e8240c6269c6" pod="kube-system/kube-proxy-mjj7w"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.480085    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-5fglk\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="db096af0-568e-459a-b2a9-3139e8957c8a" pod="kube-system/coredns-66bc5c9577-5fglk"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.480254    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="12b693a2a055e251c1b61556927a30a4" pod="kube-system/kube-scheduler-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.480410    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="44636457c2acad4cb2d7258f7377957e" pod="kube-system/kube-apiserver-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.480557    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1e8a1a47dce612b67e76b131801e7387" pod="kube-system/etcd-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: E1002 07:47:15.480717    1306 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-422707\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="557ebaeaf415604bade03417d103c013" pod="kube-system/kube-controller-manager-pause-422707"
	Oct 02 07:47:15 pause-422707 kubelet[1306]: I1002 07:47:15.480790    1306 scope.go:117] "RemoveContainer" containerID="7779786dbfb40f2436252d55263d5b88b48a937678c675a5ec383b2da42c5be2"
	Oct 02 07:47:35 pause-422707 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 07:47:35 pause-422707 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 07:47:35 pause-422707 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-422707 -n pause-422707
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-422707 -n pause-422707: exit status 2 (418.532277ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-422707 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-356986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-356986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (825.482279ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:00:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-356986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-356986 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-356986 describe deploy/metrics-server -n kube-system: exit status 1 (271.942485ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-356986 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-356986
helpers_test.go:243: (dbg) docker inspect old-k8s-version-356986:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85",
	        "Created": "2025-10-02T07:58:57.889195486Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 480883,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:58:57.969775241Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/hosts",
	        "LogPath": "/var/lib/docker/containers/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85-json.log",
	        "Name": "/old-k8s-version-356986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-356986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-356986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85",
	                "LowerDir": "/var/lib/docker/overlay2/6c3b3bba6f66fa03557331843b3a41aae7c62de28d54a4747da93c2d11a0b8e7-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c3b3bba6f66fa03557331843b3a41aae7c62de28d54a4747da93c2d11a0b8e7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c3b3bba6f66fa03557331843b3a41aae7c62de28d54a4747da93c2d11a0b8e7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c3b3bba6f66fa03557331843b3a41aae7c62de28d54a4747da93c2d11a0b8e7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-356986",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-356986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-356986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-356986",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-356986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "486cd96a7ad0c9df621f846d3c2c9c8060bba1b3ee3731b5dbc9150d254ff35a",
	            "SandboxKey": "/var/run/docker/netns/486cd96a7ad0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33402"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33401"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-356986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:90:af:71:6d:b5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c6148cfa20f53b0003f798fe96a07d1b1fb1d274fc1a1b8a6f3f1e34c962a644",
	                    "EndpointID": "8ce42d33041a65b7548717299e8955b9e49e9f8249359021e244fca286771685",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-356986",
	                        "3e0fd1abc9e1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-356986 -n old-k8s-version-356986
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-356986 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-356986 logs -n 25: (1.680943812s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-810803 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo containerd config dump                                                                                                                                                                                                  │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo crio config                                                                                                                                                                                                             │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ delete  │ -p cilium-810803                                                                                                                                                                                                                              │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │ 02 Oct 25 07:49 UTC │
	│ start   │ -p force-systemd-env-297062 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-297062  │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ force-systemd-flag-275910 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-275910 │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ delete  │ -p force-systemd-flag-275910                                                                                                                                                                                                                  │ force-systemd-flag-275910 │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ start   │ -p cert-expiration-759246 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-759246    │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ delete  │ -p force-systemd-env-297062                                                                                                                                                                                                                   │ force-systemd-env-297062  │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ start   │ -p cert-options-654417 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ ssh     │ cert-options-654417 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ ssh     │ -p cert-options-654417 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ delete  │ -p cert-options-654417                                                                                                                                                                                                                        │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:59 UTC │
	│ start   │ -p cert-expiration-759246 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-759246    │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-356986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
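	The final row above is the step this test exercises; as a sketch using this run's binary path and profile, it corresponds to re-running:
	
	  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-356986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain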
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:59:53
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:59:53.404136  483181 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:59:53.404258  483181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:59:53.404262  483181 out.go:374] Setting ErrFile to fd 2...
	I1002 07:59:53.404267  483181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:59:53.404608  483181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:59:53.405005  483181 out.go:368] Setting JSON to false
	I1002 07:59:53.406523  483181 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9745,"bootTime":1759382249,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:59:53.406588  483181 start.go:140] virtualization:  
	I1002 07:59:53.411292  483181 out.go:179] * [cert-expiration-759246] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:59:53.414931  483181 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:59:53.415065  483181 notify.go:220] Checking for updates...
	I1002 07:59:53.421379  483181 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:59:53.423704  483181 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:59:53.426808  483181 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:59:53.429857  483181 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:59:53.432773  483181 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:59:53.436390  483181 config.go:182] Loaded profile config "cert-expiration-759246": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:59:53.436945  483181 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:59:53.469404  483181 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:59:53.469525  483181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:59:53.530608  483181 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 07:59:53.520555519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:59:53.530702  483181 docker.go:318] overlay module found
	I1002 07:59:53.533951  483181 out.go:179] * Using the docker driver based on existing profile
	I1002 07:59:53.536838  483181 start.go:304] selected driver: docker
	I1002 07:59:53.536849  483181 start.go:924] validating driver "docker" against &{Name:cert-expiration-759246 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-759246 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:59:53.536942  483181 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:59:53.537719  483181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:59:53.602926  483181 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 07:59:53.592361091 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:59:53.603386  483181 cni.go:84] Creating CNI manager for ""
	I1002 07:59:53.603450  483181 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 07:59:53.603493  483181 start.go:348] cluster config:
	{Name:cert-expiration-759246 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-759246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1002 07:59:53.606734  483181 out.go:179] * Starting "cert-expiration-759246" primary control-plane node in "cert-expiration-759246" cluster
	I1002 07:59:53.609678  483181 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 07:59:53.612534  483181 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:59:53.615384  483181 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 07:59:53.615437  483181 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 07:59:53.615445  483181 cache.go:58] Caching tarball of preloaded images
	I1002 07:59:53.615467  483181 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:59:53.615607  483181 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 07:59:53.615617  483181 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 07:59:53.615745  483181 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/cert-expiration-759246/config.json ...
	I1002 07:59:53.638864  483181 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:59:53.638876  483181 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:59:53.638897  483181 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:59:53.638925  483181 start.go:360] acquireMachinesLock for cert-expiration-759246: {Name:mk9124d9c2087dfeb6c28c0c613ea0a41bf56f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:59:53.638994  483181 start.go:364] duration metric: took 52.406µs to acquireMachinesLock for "cert-expiration-759246"
	I1002 07:59:53.639016  483181 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:59:53.639030  483181 fix.go:54] fixHost starting: 
	I1002 07:59:53.639339  483181 cli_runner.go:164] Run: docker container inspect cert-expiration-759246 --format={{.State.Status}}
	I1002 07:59:53.657177  483181 fix.go:112] recreateIfNeeded on cert-expiration-759246: state=Running err=<nil>
	W1002 07:59:53.657198  483181 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:59:53.660570  483181 out.go:252] * Updating the running docker "cert-expiration-759246" container ...
	I1002 07:59:53.660600  483181 machine.go:93] provisionDockerMachine start ...
	I1002 07:59:53.660693  483181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:59:53.678240  483181 main.go:141] libmachine: Using SSH client type: native
	I1002 07:59:53.678565  483181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1002 07:59:53.678575  483181 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:59:53.810849  483181 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-759246
	
	I1002 07:59:53.810863  483181 ubuntu.go:182] provisioning hostname "cert-expiration-759246"
	I1002 07:59:53.810925  483181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:59:53.829563  483181 main.go:141] libmachine: Using SSH client type: native
	I1002 07:59:53.829869  483181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1002 07:59:53.829879  483181 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-759246 && echo "cert-expiration-759246" | sudo tee /etc/hostname
	I1002 07:59:53.973164  483181 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-759246
	
	I1002 07:59:53.973246  483181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:59:53.994585  483181 main.go:141] libmachine: Using SSH client type: native
	I1002 07:59:53.994894  483181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1002 07:59:53.994909  483181 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-759246' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-759246/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-759246' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:59:54.135558  483181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:59:54.135574  483181 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 07:59:54.135591  483181 ubuntu.go:190] setting up certificates
	I1002 07:59:54.135600  483181 provision.go:84] configureAuth start
	I1002 07:59:54.135676  483181 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-759246
	I1002 07:59:54.154364  483181 provision.go:143] copyHostCerts
	I1002 07:59:54.154428  483181 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 07:59:54.154446  483181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 07:59:54.154524  483181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 07:59:54.154727  483181 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 07:59:54.154741  483181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 07:59:54.154776  483181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 07:59:54.154844  483181 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 07:59:54.154847  483181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 07:59:54.154871  483181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 07:59:54.154968  483181 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-759246 san=[127.0.0.1 192.168.85.2 cert-expiration-759246 localhost minikube]
	I1002 07:59:54.208253  483181 provision.go:177] copyRemoteCerts
	I1002 07:59:54.208316  483181 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:59:54.208357  483181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:59:54.235220  483181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/cert-expiration-759246/id_rsa Username:docker}
	I1002 07:59:54.331528  483181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:59:54.350926  483181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 07:59:54.369520  483181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 07:59:54.388714  483181 provision.go:87] duration metric: took 253.090078ms to configureAuth
	I1002 07:59:54.388733  483181 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:59:54.388926  483181 config.go:182] Loaded profile config "cert-expiration-759246": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:59:54.389032  483181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-759246
	I1002 07:59:54.406951  483181 main.go:141] libmachine: Using SSH client type: native
	I1002 07:59:54.407276  483181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1002 07:59:54.407288  483181 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Oct 02 07:59:48 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:48.527628537Z" level=info msg="Starting container: 52cf739434cfef7b60fe1d345900608a5d558c17c9c4eed20728d6284bb76305" id=c36d7849-8adc-4d86-843e-fd16d9674fea name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:59:48 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:48.531859599Z" level=info msg="Started container" PID=1903 containerID=401d6e192953496e07a7ada74f3f861912c2b8a1b7bbdb60d1458d7da588c1a4 description=kube-system/storage-provisioner/storage-provisioner id=1bedac56-1e3e-421c-bc64-b05e84f9ac01 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e53e517aa8e70e54dc38ed290c714f0fe7a5adfb03bac7963e82bab1d1218bde
	Oct 02 07:59:48 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:48.537741852Z" level=info msg="Started container" PID=1907 containerID=52cf739434cfef7b60fe1d345900608a5d558c17c9c4eed20728d6284bb76305 description=kube-system/coredns-5dd5756b68-rcxgd/coredns id=c36d7849-8adc-4d86-843e-fd16d9674fea name=/runtime.v1.RuntimeService/StartContainer sandboxID=15b226ff975839082b1f28b7573aee2d11402c2edc9925a0ec3f7fe9d46cc9ef
	Oct 02 07:59:51 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:51.444363404Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0baaf3b0-3c44-47e8-8621-250d11a560ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 07:59:51 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:51.444446473Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:59:51 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:51.449697409Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:06e7454a54c5fa78e7ca7c7006af0c1bc3f307122db59d8bd9e61f7a9722e57c UID:511ea254-6098-48d3-9677-8672c1681171 NetNS:/var/run/netns/f4dad904-9fdc-40b6-9867-8d3ea2b27794 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40026cc630}] Aliases:map[]}"
	Oct 02 07:59:51 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:51.449737287Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 02 07:59:51 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:51.458919886Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:06e7454a54c5fa78e7ca7c7006af0c1bc3f307122db59d8bd9e61f7a9722e57c UID:511ea254-6098-48d3-9677-8672c1681171 NetNS:/var/run/netns/f4dad904-9fdc-40b6-9867-8d3ea2b27794 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40026cc630}] Aliases:map[]}"
	Oct 02 07:59:51 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:51.459552441Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 02 07:59:51 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:51.463405063Z" level=info msg="Ran pod sandbox 06e7454a54c5fa78e7ca7c7006af0c1bc3f307122db59d8bd9e61f7a9722e57c with infra container: default/busybox/POD" id=0baaf3b0-3c44-47e8-8621-250d11a560ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 07:59:51 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:51.465430608Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=534658e8-d447-4284-83c9-0172d5081d72 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:59:51 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:51.465656431Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=534658e8-d447-4284-83c9-0172d5081d72 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:59:51 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:51.465756978Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=534658e8-d447-4284-83c9-0172d5081d72 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:59:51 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:51.466788454Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7235bf67-e956-484b-a1eb-c06c67736f90 name=/runtime.v1.ImageService/PullImage
	Oct 02 07:59:51 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:51.469241761Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 02 07:59:53 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:53.374108099Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=7235bf67-e956-484b-a1eb-c06c67736f90 name=/runtime.v1.ImageService/PullImage
	Oct 02 07:59:53 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:53.377151647Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7816cd61-5632-45bd-b496-5f0dc6e1ba82 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 07:59:53 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:53.383849792Z" level=info msg="Creating container: default/busybox/busybox" id=13f9fc09-0135-4b88-9126-9cafac7a38ec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:59:53 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:53.384623846Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:59:53 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:53.389895304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:59:53 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:53.390400949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 07:59:53 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:53.423956391Z" level=info msg="Created container 5777657a8b60a322feac7ea5d86c9c415568dad45e898f6df490936c95cb2181: default/busybox/busybox" id=13f9fc09-0135-4b88-9126-9cafac7a38ec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 07:59:53 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:53.42755836Z" level=info msg="Starting container: 5777657a8b60a322feac7ea5d86c9c415568dad45e898f6df490936c95cb2181" id=357eea0e-6893-419d-9ea0-29b86feb831f name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 07:59:53 old-k8s-version-356986 crio[835]: time="2025-10-02T07:59:53.431393431Z" level=info msg="Started container" PID=1965 containerID=5777657a8b60a322feac7ea5d86c9c415568dad45e898f6df490936c95cb2181 description=default/busybox/busybox id=357eea0e-6893-419d-9ea0-29b86feb831f name=/runtime.v1.RuntimeService/StartContainer sandboxID=06e7454a54c5fa78e7ca7c7006af0c1bc3f307122db59d8bd9e61f7a9722e57c
	Oct 02 08:00:00 old-k8s-version-356986 crio[835]: time="2025-10-02T08:00:00.818443633Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	5777657a8b60a       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   10 seconds ago      Running             busybox                   0                   06e7454a54c5f       busybox                                          default
	52cf739434cfe       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      15 seconds ago      Running             coredns                   0                   15b226ff97583       coredns-5dd5756b68-rcxgd                         kube-system
	401d6e1929534       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 seconds ago      Running             storage-provisioner       0                   e53e517aa8e70       storage-provisioner                              kube-system
	c5ace114ef2b5       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    26 seconds ago      Running             kindnet-cni               0                   c77d932aca9c9       kindnet-h7blk                                    kube-system
	39fcd4f31c090       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      29 seconds ago      Running             kube-proxy                0                   0c181d7bc0323       kube-proxy-8ds6v                                 kube-system
	e96a21e6acd8e       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      49 seconds ago      Running             kube-controller-manager   0                   7cda5603e6948       kube-controller-manager-old-k8s-version-356986   kube-system
	1b3f6de8e6749       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      49 seconds ago      Running             kube-scheduler            0                   e5e6695e256bd       kube-scheduler-old-k8s-version-356986            kube-system
	30d2911d4deaa       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      49 seconds ago      Running             etcd                      0                   4229bc39657fb       etcd-old-k8s-version-356986                      kube-system
	3cca2b91d116e       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      49 seconds ago      Running             kube-apiserver            0                   e7183e77446fb       kube-apiserver-old-k8s-version-356986            kube-system
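	
	The listing above reflects the node's CRI-O runtime state; as a rough sketch (command assumed, not taken from this log), an equivalent view is available from inside the node:
	
	  out/minikube-linux-arm64 ssh -p old-k8s-version-356986 -- sudo crictl ps -a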
	
	
	==> coredns [52cf739434cfef7b60fe1d345900608a5d558c17c9c4eed20728d6284bb76305] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59296 - 12384 "HINFO IN 2453091886556295629.6440803019444059344. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021632796s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-356986
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-356986
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=old-k8s-version-356986
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_59_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:59:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-356986
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:00:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:59:52 +0000   Thu, 02 Oct 2025 07:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:59:52 +0000   Thu, 02 Oct 2025 07:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:59:52 +0000   Thu, 02 Oct 2025 07:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:59:52 +0000   Thu, 02 Oct 2025 07:59:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-356986
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea33fab1d89c42d5b83080e77f3179b9
	  System UUID:                35f9767f-9ab2-47f0-8d89-175f1127470c
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-rcxgd                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-old-k8s-version-356986                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         44s
	  kube-system                 kindnet-h7blk                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-old-k8s-version-356986             250m (12%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-old-k8s-version-356986    200m (10%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-8ds6v                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-old-k8s-version-356986             100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node old-k8s-version-356986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x8 over 50s)  kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientPID
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node old-k8s-version-356986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s                node-controller  Node old-k8s-version-356986 event: Registered Node old-k8s-version-356986 in Controller
	  Normal  NodeReady                16s                kubelet          Node old-k8s-version-356986 status is now: NodeReady
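	
	The node summary above has the shape of a kubectl node description; as a sketch (command assumed, context name following minikube's profile naming), the same information can be pulled from the live cluster with:
	
	  kubectl --context old-k8s-version-356986 describe node old-k8s-version-356986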
	
	
	==> dmesg <==
	[  +2.690454] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:30] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:31] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:33] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [30d2911d4deaa31b3552538ec71160a21a7c3d4ff0338f21d3b72e0ae29ef6f2] <==
	{"level":"info","ts":"2025-10-02T07:59:14.642815Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T07:59:14.653501Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T07:59:14.642395Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T07:59:14.649587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-02T07:59:14.653718Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T07:59:14.65382Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T07:59:14.654272Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-02T07:59:15.517858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-02T07:59:15.517921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-02T07:59:15.517939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-02T07:59:15.517953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-02T07:59:15.517959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-02T07:59:15.517969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-02T07:59:15.518002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-02T07:59:15.519825Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-356986 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-02T07:59:15.51987Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T07:59:15.520055Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T07:59:15.520744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-02T07:59:15.520815Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-02T07:59:15.52086Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T07:59:15.5209Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-02T07:59:15.521282Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T07:59:15.521401Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T07:59:15.525131Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T07:59:15.522304Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 08:00:04 up  2:42,  0 user,  load average: 2.42, 1.30, 1.56
	Linux old-k8s-version-356986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c5ace114ef2b5ee85685f58c450447f748f2df98a455b751093880714db34ebe] <==
	I1002 07:59:37.604213       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 07:59:37.604435       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 07:59:37.604552       1 main.go:148] setting mtu 1500 for CNI 
	I1002 07:59:37.604571       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 07:59:37.604580       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T07:59:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 07:59:37.807447       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 07:59:37.807478       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 07:59:37.807488       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 07:59:37.807823       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 07:59:38.108482       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 07:59:38.108511       1 metrics.go:72] Registering metrics
	I1002 07:59:38.108587       1 controller.go:711] "Syncing nftables rules"
	I1002 07:59:47.811850       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 07:59:47.811905       1 main.go:301] handling current node
	I1002 07:59:57.808614       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 07:59:57.808649       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3cca2b91d116ef4756cc050417d46d9ce275a0e2d16d257bb817b309c1f5fb73] <==
	E1002 07:59:18.358967       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","exempt","catch-all","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E1002 07:59:18.374769       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","workload-low","global-default","system","node-high","leader-election","workload-high","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1002 07:59:18.402110       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","exempt","catch-all","global-default","system"] items=[{},{},{},{},{},{},{},{}]
	E1002 07:59:18.440336       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E1002 07:59:18.457676       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E1002 07:59:18.460897       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	I1002 07:59:19.010878       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 07:59:19.017039       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 07:59:19.017125       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:59:19.662846       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 07:59:19.711027       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 07:59:19.814128       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 07:59:19.821635       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1002 07:59:19.822837       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 07:59:19.827876       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:59:20.186548       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 07:59:21.638665       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 07:59:21.652201       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 07:59:21.668852       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	E1002 07:59:28.203249       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	I1002 07:59:33.547155       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1002 07:59:33.696902       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	E1002 07:59:38.203691       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","global-default","catch-all","exempt","system","node-high","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E1002 07:59:48.204666       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1002 07:59:58.205608       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [e96a21e6acd8ec21b1eeaddeb4e33f48b366e21de9ab19281f3114bc52428901] <==
	I1002 07:59:33.071653       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1002 07:59:33.073787       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 07:59:33.142800       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 07:59:33.472496       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 07:59:33.538058       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 07:59:33.538090       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1002 07:59:33.558611       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8ds6v"
	I1002 07:59:33.563780       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-h7blk"
	I1002 07:59:33.703265       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1002 07:59:34.099886       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-v8jdf"
	I1002 07:59:34.119818       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rcxgd"
	I1002 07:59:34.176495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="473.808746ms"
	I1002 07:59:34.224712       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.109699ms"
	I1002 07:59:34.224807       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.853µs"
	I1002 07:59:34.264185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.738µs"
	I1002 07:59:35.283387       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1002 07:59:35.327284       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-v8jdf"
	I1002 07:59:35.348497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.600923ms"
	I1002 07:59:35.371124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.538085ms"
	I1002 07:59:35.371216       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.614µs"
	I1002 07:59:48.151550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.347µs"
	I1002 07:59:48.175350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.707µs"
	I1002 07:59:49.063130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.472397ms"
	I1002 07:59:49.063243       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.75µs"
	I1002 07:59:52.933178       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [39fcd4f31c090cb9aa35ed627420e81a360a53c117fd2258b06d5326c1681928] <==
	I1002 07:59:34.645615       1 server_others.go:69] "Using iptables proxy"
	I1002 07:59:34.665660       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1002 07:59:34.728297       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:59:34.730390       1 server_others.go:152] "Using iptables Proxier"
	I1002 07:59:34.730428       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 07:59:34.730434       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 07:59:34.730466       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 07:59:34.731668       1 server.go:846] "Version info" version="v1.28.0"
	I1002 07:59:34.731683       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:59:34.744237       1 config.go:188] "Starting service config controller"
	I1002 07:59:34.744279       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 07:59:34.744302       1 config.go:97] "Starting endpoint slice config controller"
	I1002 07:59:34.744306       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 07:59:34.744776       1 config.go:315] "Starting node config controller"
	I1002 07:59:34.744784       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 07:59:34.845299       1 shared_informer.go:318] Caches are synced for node config
	I1002 07:59:34.845335       1 shared_informer.go:318] Caches are synced for service config
	I1002 07:59:34.845364       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1b3f6de8e67493bd4c6458e1c8b56652bdddd025f8fe74ee47404639376651e8] <==
	W1002 07:59:18.171275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 07:59:18.171287       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1002 07:59:18.171354       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 07:59:18.171370       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1002 07:59:18.171444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 07:59:18.171463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1002 07:59:18.999387       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 07:59:18.999431       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1002 07:59:19.011353       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 07:59:19.011449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 07:59:19.039729       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 07:59:19.039840       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1002 07:59:19.256785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 07:59:19.256912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1002 07:59:19.267290       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 07:59:19.267395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 07:59:19.299426       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 07:59:19.299465       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 07:59:19.374180       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 07:59:19.374232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1002 07:59:19.376995       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 07:59:19.377039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1002 07:59:19.435575       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 07:59:19.435680       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1002 07:59:21.444452       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 07:59:33 old-k8s-version-356986 kubelet[1356]: I1002 07:59:33.669197    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd6f4e26-b3d0-4f9d-9a24-82a9be803571-lib-modules\") pod \"kindnet-h7blk\" (UID: \"dd6f4e26-b3d0-4f9d-9a24-82a9be803571\") " pod="kube-system/kindnet-h7blk"
	Oct 02 07:59:33 old-k8s-version-356986 kubelet[1356]: I1002 07:59:33.669220    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbg2p\" (UniqueName: \"kubernetes.io/projected/dd6f4e26-b3d0-4f9d-9a24-82a9be803571-kube-api-access-wbg2p\") pod \"kindnet-h7blk\" (UID: \"dd6f4e26-b3d0-4f9d-9a24-82a9be803571\") " pod="kube-system/kindnet-h7blk"
	Oct 02 07:59:33 old-k8s-version-356986 kubelet[1356]: I1002 07:59:33.669250    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59331def-12d1-49a1-9948-c559d336e730-lib-modules\") pod \"kube-proxy-8ds6v\" (UID: \"59331def-12d1-49a1-9948-c559d336e730\") " pod="kube-system/kube-proxy-8ds6v"
	Oct 02 07:59:33 old-k8s-version-356986 kubelet[1356]: I1002 07:59:33.669280    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dd6f4e26-b3d0-4f9d-9a24-82a9be803571-cni-cfg\") pod \"kindnet-h7blk\" (UID: \"dd6f4e26-b3d0-4f9d-9a24-82a9be803571\") " pod="kube-system/kindnet-h7blk"
	Oct 02 07:59:33 old-k8s-version-356986 kubelet[1356]: E1002 07:59:33.781102    1356 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 02 07:59:33 old-k8s-version-356986 kubelet[1356]: E1002 07:59:33.781146    1356 projected.go:198] Error preparing data for projected volume kube-api-access-wbg2p for pod kube-system/kindnet-h7blk: configmap "kube-root-ca.crt" not found
	Oct 02 07:59:33 old-k8s-version-356986 kubelet[1356]: E1002 07:59:33.781220    1356 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd6f4e26-b3d0-4f9d-9a24-82a9be803571-kube-api-access-wbg2p podName:dd6f4e26-b3d0-4f9d-9a24-82a9be803571 nodeName:}" failed. No retries permitted until 2025-10-02 07:59:34.281195663 +0000 UTC m=+12.674676316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wbg2p" (UniqueName: "kubernetes.io/projected/dd6f4e26-b3d0-4f9d-9a24-82a9be803571-kube-api-access-wbg2p") pod "kindnet-h7blk" (UID: "dd6f4e26-b3d0-4f9d-9a24-82a9be803571") : configmap "kube-root-ca.crt" not found
	Oct 02 07:59:33 old-k8s-version-356986 kubelet[1356]: E1002 07:59:33.781906    1356 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 02 07:59:33 old-k8s-version-356986 kubelet[1356]: E1002 07:59:33.782053    1356 projected.go:198] Error preparing data for projected volume kube-api-access-thb8s for pod kube-system/kube-proxy-8ds6v: configmap "kube-root-ca.crt" not found
	Oct 02 07:59:33 old-k8s-version-356986 kubelet[1356]: E1002 07:59:33.782177    1356 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59331def-12d1-49a1-9948-c559d336e730-kube-api-access-thb8s podName:59331def-12d1-49a1-9948-c559d336e730 nodeName:}" failed. No retries permitted until 2025-10-02 07:59:34.282157534 +0000 UTC m=+12.675638195 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-thb8s" (UniqueName: "kubernetes.io/projected/59331def-12d1-49a1-9948-c559d336e730-kube-api-access-thb8s") pod "kube-proxy-8ds6v" (UID: "59331def-12d1-49a1-9948-c559d336e730") : configmap "kube-root-ca.crt" not found
	Oct 02 07:59:34 old-k8s-version-356986 kubelet[1356]: W1002 07:59:34.484735    1356 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/crio-0c181d7bc0323966a257e2d843dd9e59bf197f5b70f899187e4667af12344d3d WatchSource:0}: Error finding container 0c181d7bc0323966a257e2d843dd9e59bf197f5b70f899187e4667af12344d3d: Status 404 returned error can't find the container with id 0c181d7bc0323966a257e2d843dd9e59bf197f5b70f899187e4667af12344d3d
	Oct 02 07:59:37 old-k8s-version-356986 kubelet[1356]: I1002 07:59:37.999398    1356 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8ds6v" podStartSLOduration=4.9993529 podCreationTimestamp="2025-10-02 07:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:59:34.998867809 +0000 UTC m=+13.392348470" watchObservedRunningTime="2025-10-02 07:59:37.9993529 +0000 UTC m=+16.392833553"
	Oct 02 07:59:41 old-k8s-version-356986 kubelet[1356]: I1002 07:59:41.837701    1356 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-h7blk" podStartSLOduration=5.8583898229999996 podCreationTimestamp="2025-10-02 07:59:33 +0000 UTC" firstStartedPulling="2025-10-02 07:59:34.526363239 +0000 UTC m=+12.919843891" lastFinishedPulling="2025-10-02 07:59:37.505631537 +0000 UTC m=+15.899112189" observedRunningTime="2025-10-02 07:59:38.001375466 +0000 UTC m=+16.394856127" watchObservedRunningTime="2025-10-02 07:59:41.837658121 +0000 UTC m=+20.231138782"
	Oct 02 07:59:48 old-k8s-version-356986 kubelet[1356]: I1002 07:59:48.116224    1356 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 02 07:59:48 old-k8s-version-356986 kubelet[1356]: I1002 07:59:48.149141    1356 topology_manager.go:215] "Topology Admit Handler" podUID="c8338f85-9518-4ede-a9a8-5d7d2a31770b" podNamespace="kube-system" podName="coredns-5dd5756b68-rcxgd"
	Oct 02 07:59:48 old-k8s-version-356986 kubelet[1356]: I1002 07:59:48.153934    1356 topology_manager.go:215] "Topology Admit Handler" podUID="e762d10a-80a8-4e4b-8b16-08e5f6fd1012" podNamespace="kube-system" podName="storage-provisioner"
	Oct 02 07:59:48 old-k8s-version-356986 kubelet[1356]: I1002 07:59:48.179345    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8338f85-9518-4ede-a9a8-5d7d2a31770b-config-volume\") pod \"coredns-5dd5756b68-rcxgd\" (UID: \"c8338f85-9518-4ede-a9a8-5d7d2a31770b\") " pod="kube-system/coredns-5dd5756b68-rcxgd"
	Oct 02 07:59:48 old-k8s-version-356986 kubelet[1356]: I1002 07:59:48.179606    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e762d10a-80a8-4e4b-8b16-08e5f6fd1012-tmp\") pod \"storage-provisioner\" (UID: \"e762d10a-80a8-4e4b-8b16-08e5f6fd1012\") " pod="kube-system/storage-provisioner"
	Oct 02 07:59:48 old-k8s-version-356986 kubelet[1356]: I1002 07:59:48.179703    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5l7h\" (UniqueName: \"kubernetes.io/projected/c8338f85-9518-4ede-a9a8-5d7d2a31770b-kube-api-access-c5l7h\") pod \"coredns-5dd5756b68-rcxgd\" (UID: \"c8338f85-9518-4ede-a9a8-5d7d2a31770b\") " pod="kube-system/coredns-5dd5756b68-rcxgd"
	Oct 02 07:59:48 old-k8s-version-356986 kubelet[1356]: I1002 07:59:48.179741    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcg4h\" (UniqueName: \"kubernetes.io/projected/e762d10a-80a8-4e4b-8b16-08e5f6fd1012-kube-api-access-hcg4h\") pod \"storage-provisioner\" (UID: \"e762d10a-80a8-4e4b-8b16-08e5f6fd1012\") " pod="kube-system/storage-provisioner"
	Oct 02 07:59:48 old-k8s-version-356986 kubelet[1356]: W1002 07:59:48.470425    1356 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/crio-e53e517aa8e70e54dc38ed290c714f0fe7a5adfb03bac7963e82bab1d1218bde WatchSource:0}: Error finding container e53e517aa8e70e54dc38ed290c714f0fe7a5adfb03bac7963e82bab1d1218bde: Status 404 returned error can't find the container with id e53e517aa8e70e54dc38ed290c714f0fe7a5adfb03bac7963e82bab1d1218bde
	Oct 02 07:59:49 old-k8s-version-356986 kubelet[1356]: I1002 07:59:49.045477    1356 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.045422319 podCreationTimestamp="2025-10-02 07:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:59:49.032020679 +0000 UTC m=+27.425501356" watchObservedRunningTime="2025-10-02 07:59:49.045422319 +0000 UTC m=+27.438902980"
	Oct 02 07:59:51 old-k8s-version-356986 kubelet[1356]: I1002 07:59:51.140710    1356 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rcxgd" podStartSLOduration=17.140634906 podCreationTimestamp="2025-10-02 07:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:59:49.046198901 +0000 UTC m=+27.439679570" watchObservedRunningTime="2025-10-02 07:59:51.140634906 +0000 UTC m=+29.534115567"
	Oct 02 07:59:51 old-k8s-version-356986 kubelet[1356]: I1002 07:59:51.141820    1356 topology_manager.go:215] "Topology Admit Handler" podUID="511ea254-6098-48d3-9677-8672c1681171" podNamespace="default" podName="busybox"
	Oct 02 07:59:51 old-k8s-version-356986 kubelet[1356]: I1002 07:59:51.200548    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h84r\" (UniqueName: \"kubernetes.io/projected/511ea254-6098-48d3-9677-8672c1681171-kube-api-access-6h84r\") pod \"busybox\" (UID: \"511ea254-6098-48d3-9677-8672c1681171\") " pod="default/busybox"
	
	
	==> storage-provisioner [401d6e192953496e07a7ada74f3f861912c2b8a1b7bbdb60d1458d7da588c1a4] <==
	I1002 07:59:48.548931       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 07:59:48.597654       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 07:59:48.597724       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 07:59:48.605811       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 07:59:48.605997       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-356986_10a829c4-7bb2-4cc2-b244-7770fb1a65bf!
	I1002 07:59:48.606763       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed2987ef-dd9a-4a01-9087-8248b6747c96", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-356986_10a829c4-7bb2-4cc2-b244-7770fb1a65bf became leader
	I1002 07:59:48.708089       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-356986_10a829c4-7bb2-4cc2-b244-7770fb1a65bf!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-356986 -n old-k8s-version-356986
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-356986 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.95s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-356986 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-356986 --alsologtostderr -v=1: exit status 80 (1.843572504s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-356986 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 08:01:17.346690  486851 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:01:17.346892  486851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:01:17.346922  486851 out.go:374] Setting ErrFile to fd 2...
	I1002 08:01:17.346940  486851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:01:17.347286  486851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:01:17.347645  486851 out.go:368] Setting JSON to false
	I1002 08:01:17.347699  486851 mustload.go:65] Loading cluster: old-k8s-version-356986
	I1002 08:01:17.348114  486851 config.go:182] Loaded profile config "old-k8s-version-356986": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 08:01:17.348763  486851 cli_runner.go:164] Run: docker container inspect old-k8s-version-356986 --format={{.State.Status}}
	I1002 08:01:17.377390  486851 host.go:66] Checking if "old-k8s-version-356986" exists ...
	I1002 08:01:17.377865  486851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:01:17.438873  486851 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 08:01:17.429028863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:01:17.439685  486851 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-356986 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 08:01:17.443285  486851 out.go:179] * Pausing node old-k8s-version-356986 ... 
	I1002 08:01:17.446306  486851 host.go:66] Checking if "old-k8s-version-356986" exists ...
	I1002 08:01:17.446669  486851 ssh_runner.go:195] Run: systemctl --version
	I1002 08:01:17.446717  486851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:01:17.466676  486851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:01:17.562689  486851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:01:17.576762  486851 pause.go:51] kubelet running: true
	I1002 08:01:17.576838  486851 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:01:17.842320  486851 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:01:17.842424  486851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:01:17.922912  486851 cri.go:89] found id: "c8e11fae143d5af223ee8cd93022f50e9979e42cab3a78166ca1dc1c9138f36b"
	I1002 08:01:17.922947  486851 cri.go:89] found id: "eca1bbbe0fa0e2cbf83d0a6ec4dfa7da3823783de1873ea5b0b9c60ab6006bca"
	I1002 08:01:17.922953  486851 cri.go:89] found id: "c00032d7e435ee7c15d9510c5e137e5ba35b440362a62a7350120efff8c5da6a"
	I1002 08:01:17.922957  486851 cri.go:89] found id: "f3fbaee89da23074470de0cc3ebaf94c5dbfafef85f926825eb744fa22178c11"
	I1002 08:01:17.922960  486851 cri.go:89] found id: "836f4317c979eef6a650d578749a260f6ed5e3f31c262b3b74c2a01df2ed13aa"
	I1002 08:01:17.922968  486851 cri.go:89] found id: "b7fa366bdeb131010efd7f4bbce1b448a27310eefcbf896ea00434f576624347"
	I1002 08:01:17.922972  486851 cri.go:89] found id: "6dff7f35e35a464a7d11113c050955b61777a001b9cfa9a977dce6c341d60982"
	I1002 08:01:17.922975  486851 cri.go:89] found id: "b30176313b502e961dc11a216d8f484035b3f0c1657ac76eacce6f3e3eb40e68"
	I1002 08:01:17.922978  486851 cri.go:89] found id: "b88d2bd387df7b19f12ce6afdec4d533ff093f693444fa7a3a00b64ce367911e"
	I1002 08:01:17.922984  486851 cri.go:89] found id: "8437b32d980c2a539b85229d47d8a4aa08bd4f891dc1e38c482942da633bb52a"
	I1002 08:01:17.922990  486851 cri.go:89] found id: "5d1c0fb229e1f4de7c11313a7f39ff0ac8cf227dfac5475133dcc5e3386b24f2"
	I1002 08:01:17.922994  486851 cri.go:89] found id: ""
	I1002 08:01:17.923054  486851 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:01:17.935329  486851 retry.go:31] will retry after 362.390619ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:01:17Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:01:18.298916  486851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:01:18.320311  486851 pause.go:51] kubelet running: false
	I1002 08:01:18.320383  486851 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:01:18.502532  486851 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:01:18.502618  486851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:01:18.574764  486851 cri.go:89] found id: "c8e11fae143d5af223ee8cd93022f50e9979e42cab3a78166ca1dc1c9138f36b"
	I1002 08:01:18.574789  486851 cri.go:89] found id: "eca1bbbe0fa0e2cbf83d0a6ec4dfa7da3823783de1873ea5b0b9c60ab6006bca"
	I1002 08:01:18.574795  486851 cri.go:89] found id: "c00032d7e435ee7c15d9510c5e137e5ba35b440362a62a7350120efff8c5da6a"
	I1002 08:01:18.574799  486851 cri.go:89] found id: "f3fbaee89da23074470de0cc3ebaf94c5dbfafef85f926825eb744fa22178c11"
	I1002 08:01:18.574802  486851 cri.go:89] found id: "836f4317c979eef6a650d578749a260f6ed5e3f31c262b3b74c2a01df2ed13aa"
	I1002 08:01:18.574806  486851 cri.go:89] found id: "b7fa366bdeb131010efd7f4bbce1b448a27310eefcbf896ea00434f576624347"
	I1002 08:01:18.574808  486851 cri.go:89] found id: "6dff7f35e35a464a7d11113c050955b61777a001b9cfa9a977dce6c341d60982"
	I1002 08:01:18.574811  486851 cri.go:89] found id: "b30176313b502e961dc11a216d8f484035b3f0c1657ac76eacce6f3e3eb40e68"
	I1002 08:01:18.574816  486851 cri.go:89] found id: "b88d2bd387df7b19f12ce6afdec4d533ff093f693444fa7a3a00b64ce367911e"
	I1002 08:01:18.574822  486851 cri.go:89] found id: "8437b32d980c2a539b85229d47d8a4aa08bd4f891dc1e38c482942da633bb52a"
	I1002 08:01:18.574826  486851 cri.go:89] found id: "5d1c0fb229e1f4de7c11313a7f39ff0ac8cf227dfac5475133dcc5e3386b24f2"
	I1002 08:01:18.574829  486851 cri.go:89] found id: ""
	I1002 08:01:18.574892  486851 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:01:18.586235  486851 retry.go:31] will retry after 253.470452ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:01:18Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:01:18.840803  486851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:01:18.855206  486851 pause.go:51] kubelet running: false
	I1002 08:01:18.855345  486851 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:01:19.029516  486851 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:01:19.029640  486851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:01:19.102373  486851 cri.go:89] found id: "c8e11fae143d5af223ee8cd93022f50e9979e42cab3a78166ca1dc1c9138f36b"
	I1002 08:01:19.102434  486851 cri.go:89] found id: "eca1bbbe0fa0e2cbf83d0a6ec4dfa7da3823783de1873ea5b0b9c60ab6006bca"
	I1002 08:01:19.102472  486851 cri.go:89] found id: "c00032d7e435ee7c15d9510c5e137e5ba35b440362a62a7350120efff8c5da6a"
	I1002 08:01:19.102491  486851 cri.go:89] found id: "f3fbaee89da23074470de0cc3ebaf94c5dbfafef85f926825eb744fa22178c11"
	I1002 08:01:19.102515  486851 cri.go:89] found id: "836f4317c979eef6a650d578749a260f6ed5e3f31c262b3b74c2a01df2ed13aa"
	I1002 08:01:19.102551  486851 cri.go:89] found id: "b7fa366bdeb131010efd7f4bbce1b448a27310eefcbf896ea00434f576624347"
	I1002 08:01:19.102571  486851 cri.go:89] found id: "6dff7f35e35a464a7d11113c050955b61777a001b9cfa9a977dce6c341d60982"
	I1002 08:01:19.102592  486851 cri.go:89] found id: "b30176313b502e961dc11a216d8f484035b3f0c1657ac76eacce6f3e3eb40e68"
	I1002 08:01:19.102614  486851 cri.go:89] found id: "b88d2bd387df7b19f12ce6afdec4d533ff093f693444fa7a3a00b64ce367911e"
	I1002 08:01:19.102652  486851 cri.go:89] found id: "8437b32d980c2a539b85229d47d8a4aa08bd4f891dc1e38c482942da633bb52a"
	I1002 08:01:19.102672  486851 cri.go:89] found id: "5d1c0fb229e1f4de7c11313a7f39ff0ac8cf227dfac5475133dcc5e3386b24f2"
	I1002 08:01:19.102693  486851 cri.go:89] found id: ""
	I1002 08:01:19.102769  486851 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:01:19.117345  486851 out.go:203] 
	W1002 08:01:19.120381  486851 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:01:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:01:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 08:01:19.120408  486851 out.go:285] * 
	* 
	W1002 08:01:19.126003  486851 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 08:01:19.131880  486851 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-356986 --alsologtostderr -v=1 failed: exit status 80
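The exit status 80 traces back to the runtime check in the pause path shown in the stderr above: after disabling the kubelet, minikube lists the kube-system/kubernetes-dashboard/istio-operator containers via crictl and then runs `sudo runc list -f json`, which fails on every retry with "open /run/runc: no such file or directory" on this crio node. A minimal sketch of the same checks run by hand (the `ssh` invocation and the `ls` probe are illustrative assumptions; the runc and crictl commands are the ones from the trace above):
	# open a shell inside the node for this profile (assumed invocation)
	out/minikube-linux-arm64 -p old-k8s-version-356986 ssh
	# the call that pause retries and finally gives up on
	sudo runc list -f json
	# the CRI-level listing that still succeeds via crio
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# confirm the state directory named in the error is actually missing
	ls -ld /run/runc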
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-356986
helpers_test.go:243: (dbg) docker inspect old-k8s-version-356986:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85",
	        "Created": "2025-10-02T07:58:57.889195486Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484758,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T08:00:17.440741185Z",
	            "FinishedAt": "2025-10-02T08:00:16.60779997Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/hosts",
	        "LogPath": "/var/lib/docker/containers/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85-json.log",
	        "Name": "/old-k8s-version-356986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-356986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-356986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85",
	                "LowerDir": "/var/lib/docker/overlay2/6c3b3bba6f66fa03557331843b3a41aae7c62de28d54a4747da93c2d11a0b8e7-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c3b3bba6f66fa03557331843b3a41aae7c62de28d54a4747da93c2d11a0b8e7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c3b3bba6f66fa03557331843b3a41aae7c62de28d54a4747da93c2d11a0b8e7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c3b3bba6f66fa03557331843b3a41aae7c62de28d54a4747da93c2d11a0b8e7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-356986",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-356986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-356986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-356986",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-356986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e3cc75bdbb0ddaa3d7d545fc5415088a7ced459e4cd0c0f7ae547d0de062ef15",
	            "SandboxKey": "/var/run/docker/netns/e3cc75bdbb0d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-356986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:55:25:d6:36:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c6148cfa20f53b0003f798fe96a07d1b1fb1d274fc1a1b8a6f3f1e34c962a644",
	                    "EndpointID": "e3895e7bdf593e0aefdc325bba199d198bdf0aa134ab4e04aaf5fe7cc8b5cbf6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-356986",
	                        "3e0fd1abc9e1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
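For quick reference, the host-side port mappings recorded in the inspect output above can be read back with the same Go-template query that minikube itself issues later in this log; a minimal sketch using only the container name and ports already shown (33403 for 22/tcp, 33406 for 8443/tcp):

    # host port mapped to the guest SSH port (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-356986
    # host port mapped to the Kubernetes API server port (8443/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-356986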
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-356986 -n old-k8s-version-356986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-356986 -n old-k8s-version-356986: exit status 2 (338.496366ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-356986 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-356986 logs -n 25: (1.341992247s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-810803 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo containerd config dump                                                                                                                                                                                                  │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo crio config                                                                                                                                                                                                             │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ delete  │ -p cilium-810803                                                                                                                                                                                                                              │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │ 02 Oct 25 07:49 UTC │
	│ start   │ -p force-systemd-env-297062 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-297062  │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ force-systemd-flag-275910 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-275910 │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ delete  │ -p force-systemd-flag-275910                                                                                                                                                                                                                  │ force-systemd-flag-275910 │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ start   │ -p cert-expiration-759246 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-759246    │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ delete  │ -p force-systemd-env-297062                                                                                                                                                                                                                   │ force-systemd-env-297062  │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ start   │ -p cert-options-654417 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ ssh     │ cert-options-654417 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ ssh     │ -p cert-options-654417 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ delete  │ -p cert-options-654417                                                                                                                                                                                                                        │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:59 UTC │
	│ start   │ -p cert-expiration-759246 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-759246    │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-356986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │                     │
	│ stop    │ -p old-k8s-version-356986 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:00 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-356986 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:00 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:01 UTC │
	│ image   │ old-k8s-version-356986 image list --format=json                                                                                                                                                                                               │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ pause   │ -p old-k8s-version-356986 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
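The last entry in the Audit table above is the pause invocation this test flags as failed; it can be replayed outside the harness with the same binary and profile. A minimal repro sketch (the status and unpause follow-ups are standard minikube subcommands added here to check state, not commands taken from this run):

    out/minikube-linux-arm64 pause -p old-k8s-version-356986 --alsologtostderr -v=1
    out/minikube-linux-arm64 status -p old-k8s-version-356986 --format='{{.Host}}'
    out/minikube-linux-arm64 unpause -p old-k8s-version-356986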
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:00:17
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:00:17.142249  484633 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:00:17.142372  484633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:00:17.142383  484633 out.go:374] Setting ErrFile to fd 2...
	I1002 08:00:17.142388  484633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:00:17.142640  484633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:00:17.143026  484633 out.go:368] Setting JSON to false
	I1002 08:00:17.143992  484633 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9769,"bootTime":1759382249,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 08:00:17.144063  484633 start.go:140] virtualization:  
	I1002 08:00:17.147197  484633 out.go:179] * [old-k8s-version-356986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:00:17.151177  484633 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:00:17.151227  484633 notify.go:220] Checking for updates...
	I1002 08:00:17.157182  484633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:00:17.160104  484633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:00:17.163195  484633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 08:00:17.166277  484633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:00:17.169338  484633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:00:17.173195  484633 config.go:182] Loaded profile config "old-k8s-version-356986": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 08:00:17.176633  484633 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1002 08:00:17.179490  484633 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:00:17.212740  484633 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:00:17.212874  484633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:00:17.281320  484633 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 08:00:17.271821189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:00:17.281433  484633 docker.go:318] overlay module found
	I1002 08:00:17.284564  484633 out.go:179] * Using the docker driver based on existing profile
	I1002 08:00:17.287315  484633 start.go:304] selected driver: docker
	I1002 08:00:17.287334  484633 start.go:924] validating driver "docker" against &{Name:old-k8s-version-356986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-356986 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:00:17.287436  484633 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:00:17.288152  484633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:00:17.346437  484633 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 08:00:17.336613339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:00:17.346809  484633 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:00:17.346842  484633 cni.go:84] Creating CNI manager for ""
	I1002 08:00:17.346908  484633 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:00:17.346957  484633 start.go:348] cluster config:
	{Name:old-k8s-version-356986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-356986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:00:17.350380  484633 out.go:179] * Starting "old-k8s-version-356986" primary control-plane node in "old-k8s-version-356986" cluster
	I1002 08:00:17.353369  484633 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 08:00:17.356232  484633 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 08:00:17.359144  484633 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 08:00:17.359223  484633 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1002 08:00:17.359262  484633 cache.go:58] Caching tarball of preloaded images
	I1002 08:00:17.359355  484633 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 08:00:17.359364  484633 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1002 08:00:17.359479  484633 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/config.json ...
	I1002 08:00:17.359706  484633 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 08:00:17.385104  484633 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 08:00:17.385130  484633 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 08:00:17.385159  484633 cache.go:232] Successfully downloaded all kic artifacts
	I1002 08:00:17.385184  484633 start.go:360] acquireMachinesLock for old-k8s-version-356986: {Name:mkbbae297721a7ebacae3a5cc68410b50b3203b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:00:17.385252  484633 start.go:364] duration metric: took 44.669µs to acquireMachinesLock for "old-k8s-version-356986"
	I1002 08:00:17.385274  484633 start.go:96] Skipping create...Using existing machine configuration
	I1002 08:00:17.385285  484633 fix.go:54] fixHost starting: 
	I1002 08:00:17.385550  484633 cli_runner.go:164] Run: docker container inspect old-k8s-version-356986 --format={{.State.Status}}
	I1002 08:00:17.403357  484633 fix.go:112] recreateIfNeeded on old-k8s-version-356986: state=Stopped err=<nil>
	W1002 08:00:17.403393  484633 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 08:00:17.406641  484633 out.go:252] * Restarting existing docker container for "old-k8s-version-356986" ...
	I1002 08:00:17.406725  484633 cli_runner.go:164] Run: docker start old-k8s-version-356986
	I1002 08:00:17.674419  484633 cli_runner.go:164] Run: docker container inspect old-k8s-version-356986 --format={{.State.Status}}
	I1002 08:00:17.700858  484633 kic.go:430] container "old-k8s-version-356986" state is running.
	I1002 08:00:17.701249  484633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-356986
	I1002 08:00:17.724694  484633 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/config.json ...
	I1002 08:00:17.724922  484633 machine.go:93] provisionDockerMachine start ...
	I1002 08:00:17.724984  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:17.746726  484633 main.go:141] libmachine: Using SSH client type: native
	I1002 08:00:17.747059  484633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1002 08:00:17.747068  484633 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 08:00:17.748084  484633 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 08:00:20.878908  484633 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-356986
	
	I1002 08:00:20.878935  484633 ubuntu.go:182] provisioning hostname "old-k8s-version-356986"
	I1002 08:00:20.879005  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:20.896888  484633 main.go:141] libmachine: Using SSH client type: native
	I1002 08:00:20.897204  484633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1002 08:00:20.897221  484633 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-356986 && echo "old-k8s-version-356986" | sudo tee /etc/hostname
	I1002 08:00:21.042024  484633 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-356986
	
	I1002 08:00:21.042130  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:21.059454  484633 main.go:141] libmachine: Using SSH client type: native
	I1002 08:00:21.059771  484633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1002 08:00:21.059795  484633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-356986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-356986/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-356986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 08:00:21.191356  484633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 08:00:21.191384  484633 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 08:00:21.191417  484633 ubuntu.go:190] setting up certificates
	I1002 08:00:21.191427  484633 provision.go:84] configureAuth start
	I1002 08:00:21.191494  484633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-356986
	I1002 08:00:21.208727  484633 provision.go:143] copyHostCerts
	I1002 08:00:21.208799  484633 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 08:00:21.208821  484633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 08:00:21.208898  484633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 08:00:21.209012  484633 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 08:00:21.209024  484633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 08:00:21.209054  484633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 08:00:21.209120  484633 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 08:00:21.209127  484633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 08:00:21.209153  484633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 08:00:21.209216  484633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-356986 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-356986]
	I1002 08:00:21.806740  484633 provision.go:177] copyRemoteCerts
	I1002 08:00:21.806818  484633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 08:00:21.806887  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:21.825068  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:21.923203  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 08:00:21.941976  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 08:00:21.960936  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 08:00:21.979839  484633 provision.go:87] duration metric: took 788.393125ms to configureAuth
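The server certificate generated above carries the SANs 127.0.0.1, 192.168.76.2, localhost, minikube and old-k8s-version-356986, and is copied to /etc/docker/server.pem on the node; the same openssl invocation used elsewhere in this report can confirm what actually landed there. A sketch, assuming only the paths shown in the log:

    out/minikube-linux-arm64 -p old-k8s-version-356986 ssh -- sudo openssl x509 -text -noout -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'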
	I1002 08:00:21.979910  484633 ubuntu.go:206] setting minikube options for container-runtime
	I1002 08:00:21.980128  484633 config.go:182] Loaded profile config "old-k8s-version-356986": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 08:00:21.980240  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:21.997960  484633 main.go:141] libmachine: Using SSH client type: native
	I1002 08:00:21.998382  484633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1002 08:00:21.998409  484633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 08:00:22.302744  484633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 08:00:22.302767  484633 machine.go:96] duration metric: took 4.577836355s to provisionDockerMachine
	I1002 08:00:22.302778  484633 start.go:293] postStartSetup for "old-k8s-version-356986" (driver="docker")
	I1002 08:00:22.302788  484633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 08:00:22.302859  484633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 08:00:22.302898  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:22.327270  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:22.423262  484633 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 08:00:22.426837  484633 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 08:00:22.426906  484633 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 08:00:22.426922  484633 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 08:00:22.426981  484633 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 08:00:22.427071  484633 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 08:00:22.427215  484633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 08:00:22.435125  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:00:22.452987  484633 start.go:296] duration metric: took 150.191899ms for postStartSetup
	I1002 08:00:22.453081  484633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 08:00:22.453122  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:22.470456  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:22.564474  484633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 08:00:22.569205  484633 fix.go:56] duration metric: took 5.183918099s for fixHost
	I1002 08:00:22.569232  484633 start.go:83] releasing machines lock for "old-k8s-version-356986", held for 5.183970546s
	I1002 08:00:22.569301  484633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-356986
	I1002 08:00:22.585823  484633 ssh_runner.go:195] Run: cat /version.json
	I1002 08:00:22.585887  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:22.586172  484633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 08:00:22.586246  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:22.618102  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:22.618156  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:22.807744  484633 ssh_runner.go:195] Run: systemctl --version
	I1002 08:00:22.814786  484633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 08:00:22.850046  484633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 08:00:22.855202  484633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 08:00:22.855339  484633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 08:00:22.863314  484633 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 08:00:22.863381  484633 start.go:495] detecting cgroup driver to use...
	I1002 08:00:22.863430  484633 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 08:00:22.863500  484633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 08:00:22.879021  484633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 08:00:22.892216  484633 docker.go:218] disabling cri-docker service (if available) ...
	I1002 08:00:22.892582  484633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 08:00:22.911607  484633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 08:00:22.925052  484633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 08:00:23.041298  484633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 08:00:23.177871  484633 docker.go:234] disabling docker service ...
	I1002 08:00:23.178000  484633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 08:00:23.194494  484633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 08:00:23.208921  484633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 08:00:23.336430  484633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 08:00:23.451234  484633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 08:00:23.465361  484633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 08:00:23.480512  484633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 08:00:23.480586  484633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:00:23.490063  484633 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 08:00:23.490158  484633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:00:23.500392  484633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:00:23.509514  484633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:00:23.519278  484633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 08:00:23.528564  484633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:00:23.538077  484633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:00:23.547465  484633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:00:23.557154  484633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 08:00:23.565609  484633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 08:00:23.573578  484633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:00:23.684857  484633 ssh_runner.go:195] Run: sudo systemctl restart crio
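All of the sed edits above target /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted; the resulting files can be inspected from the host over minikube ssh. A small sketch using only paths that appear in this log:

    out/minikube-linux-arm64 -p old-k8s-version-356986 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf
    out/minikube-linux-arm64 -p old-k8s-version-356986 ssh -- sudo cat /etc/sysconfig/crio.minikube
    out/minikube-linux-arm64 -p old-k8s-version-356986 ssh -- sudo systemctl is-active crio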
	I1002 08:00:23.817126  484633 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 08:00:23.817207  484633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 08:00:23.821533  484633 start.go:563] Will wait 60s for crictl version
	I1002 08:00:23.821600  484633 ssh_runner.go:195] Run: which crictl
	I1002 08:00:23.825192  484633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 08:00:23.850348  484633 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
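When a pause failure leaves the runtime in an unexpected state, the crictl binary queried above can also list what CRI-O is actually running; a sketch assuming the standard crictl subcommands (ps, pods), which are not part of this run's output:

    out/minikube-linux-arm64 -p old-k8s-version-356986 ssh -- sudo crictl version
    out/minikube-linux-arm64 -p old-k8s-version-356986 ssh -- sudo crictl ps -a
    out/minikube-linux-arm64 -p old-k8s-version-356986 ssh -- sudo crictl pods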
	I1002 08:00:23.850442  484633 ssh_runner.go:195] Run: crio --version
	I1002 08:00:23.882081  484633 ssh_runner.go:195] Run: crio --version
	I1002 08:00:23.913041  484633 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1002 08:00:23.915860  484633 cli_runner.go:164] Run: docker network inspect old-k8s-version-356986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:00:23.932102  484633 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 08:00:23.935902  484633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:00:23.945934  484633 kubeadm.go:883] updating cluster {Name:old-k8s-version-356986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-356986 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 08:00:23.946048  484633 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 08:00:23.946111  484633 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:00:23.979339  484633 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:00:23.979367  484633 crio.go:433] Images already preloaded, skipping extraction
	I1002 08:00:23.979424  484633 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:00:24.014607  484633 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:00:24.014636  484633 cache_images.go:85] Images are preloaded, skipping loading
	I1002 08:00:24.014644  484633 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1002 08:00:24.014747  484633 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-356986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-356986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
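The kubelet unit override rendered above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later in this log; it can be compared against what is actually installed on the node. A sketch (the systemctl cat form mirrors what this suite already does for crio):

    out/minikube-linux-arm64 -p old-k8s-version-356986 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    out/minikube-linux-arm64 -p old-k8s-version-356986 ssh -- sudo systemctl cat kubelet --no-pager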
	I1002 08:00:24.014841  484633 ssh_runner.go:195] Run: crio config
	I1002 08:00:24.092603  484633 cni.go:84] Creating CNI manager for ""
	I1002 08:00:24.092631  484633 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:00:24.092648  484633 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 08:00:24.092672  484633 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-356986 NodeName:old-k8s-version-356986 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 08:00:24.092814  484633 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-356986"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 08:00:24.092890  484633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1002 08:00:24.101012  484633 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 08:00:24.101116  484633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 08:00:24.109499  484633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1002 08:00:24.122813  484633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 08:00:24.136682  484633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
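The manifest above is what gets staged as /var/tmp/minikube/kubeadm.yaml.new (the 2160-byte scp on this line); later in this log it is diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguration. A minimal sketch for inspecting the same files by hand, using only paths taken from this log:

    # show the freshly rendered config and compare it with the one already on the node
    minikube -p old-k8s-version-356986 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    minikube -p old-k8s-version-356986 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new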
	I1002 08:00:24.150076  484633 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 08:00:24.153935  484633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
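The one-liner above rewrites /etc/hosts through a temp file plus cp rather than editing in place, presumably because /etc/hosts is bind-mounted into the docker-driver node and a rename-based edit would fail there (an assumption about intent, not something the log states); the net effect is that any stale control-plane.minikube.internal entry is dropped and the name is pinned to 192.168.76.2. A commented sketch of the same pattern, run inside the node (e.g. via minikube ssh):

    # rebuild the file without the old entry, append the current control-plane IP, then copy it back
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; echo $'192.168.76.2\tcontrol-plane.minikube.internal'; } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts
    grep control-plane.minikube.internal /etc/hosts    # expect one line pointing at 192.168.76.2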
	I1002 08:00:24.164382  484633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:00:24.279168  484633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:00:24.298142  484633 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986 for IP: 192.168.76.2
	I1002 08:00:24.298166  484633 certs.go:195] generating shared ca certs ...
	I1002 08:00:24.298183  484633 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:00:24.298352  484633 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 08:00:24.298401  484633 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 08:00:24.298414  484633 certs.go:257] generating profile certs ...
	I1002 08:00:24.298505  484633 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.key
	I1002 08:00:24.298557  484633 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/apiserver.key.56ee8b80
	I1002 08:00:24.298597  484633 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/proxy-client.key
	I1002 08:00:24.298717  484633 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 08:00:24.298761  484633 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 08:00:24.298774  484633 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 08:00:24.298800  484633 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 08:00:24.298826  484633 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 08:00:24.298853  484633 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 08:00:24.298898  484633 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:00:24.299514  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 08:00:24.318277  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 08:00:24.344413  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 08:00:24.372411  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 08:00:24.403238  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 08:00:24.439032  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 08:00:24.471701  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 08:00:24.494328  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 08:00:24.521824  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 08:00:24.542796  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 08:00:24.563453  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 08:00:24.585290  484633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 08:00:24.600685  484633 ssh_runner.go:195] Run: openssl version
	I1002 08:00:24.607125  484633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 08:00:24.617321  484633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 08:00:24.621133  484633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 08:00:24.621201  484633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 08:00:24.663536  484633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 08:00:24.672043  484633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 08:00:24.680927  484633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:00:24.684897  484633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:00:24.684973  484633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:00:24.726385  484633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 08:00:24.735447  484633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 08:00:24.744260  484633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 08:00:24.748388  484633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 08:00:24.748476  484633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 08:00:24.790138  484633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
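Each openssl x509 -hash run above computes the certificate's subject-name hash, and the following ln -fs creates the <hash>.0 symlink that OpenSSL's default CApath lookup expects; that is how software on the node ends up trusting the minikube CA and the test certificates. A short sketch that checks the mapping by hand (the hash value b5213941 is the one this log produced for minikubeCA.pem):

    # the printed hash should match the symlink name created above
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0          # -> /etc/ssl/certs/minikubeCA.pem
    # a certificate signed by that CA should now resolve through the hashed CApath
    sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt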
	I1002 08:00:24.798222  484633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 08:00:24.802494  484633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 08:00:24.844528  484633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 08:00:24.887760  484633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 08:00:24.930696  484633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 08:00:24.974122  484633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 08:00:25.020680  484633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
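The -checkend 86400 calls above ask OpenSSL whether each certificate expires within the next 86400 seconds (24 hours): exit status 0 means it is still valid for at least another day, so minikube can skip regenerating it. A small sketch of the same check with an explicit result, using one of the certificate paths from this log:

    # exit 0 -> valid for >= 24h more; exit 1 -> expires (or has expired) within 24h
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo 'apiserver cert valid for at least another day' \
      || echo 'apiserver cert expiring soon - would need regeneration'
    # the raw notBefore/notAfter dates for the same certificate
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -dates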
	I1002 08:00:25.106328  484633 kubeadm.go:400] StartCluster: {Name:old-k8s-version-356986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-356986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:00:25.106440  484633 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 08:00:25.106619  484633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:00:25.181336  484633 cri.go:89] found id: "b7fa366bdeb131010efd7f4bbce1b448a27310eefcbf896ea00434f576624347"
	I1002 08:00:25.181360  484633 cri.go:89] found id: "6dff7f35e35a464a7d11113c050955b61777a001b9cfa9a977dce6c341d60982"
	I1002 08:00:25.181366  484633 cri.go:89] found id: "b30176313b502e961dc11a216d8f484035b3f0c1657ac76eacce6f3e3eb40e68"
	I1002 08:00:25.181379  484633 cri.go:89] found id: "b88d2bd387df7b19f12ce6afdec4d533ff093f693444fa7a3a00b64ce367911e"
	I1002 08:00:25.181399  484633 cri.go:89] found id: ""
	I1002 08:00:25.181490  484633 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 08:00:25.205460  484633 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:00:25Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:00:25.205565  484633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 08:00:25.221823  484633 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 08:00:25.221879  484633 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 08:00:25.221996  484633 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 08:00:25.232915  484633 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 08:00:25.233620  484633 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-356986" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:00:25.233960  484633 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-292504/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-356986" cluster setting kubeconfig missing "old-k8s-version-356986" context setting]
	I1002 08:00:25.234501  484633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:00:25.236653  484633 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 08:00:25.251767  484633 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 08:00:25.251807  484633 kubeadm.go:601] duration metric: took 29.885508ms to restartPrimaryControlPlane
	I1002 08:00:25.251851  484633 kubeadm.go:402] duration metric: took 145.533723ms to StartCluster
	I1002 08:00:25.251869  484633 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:00:25.251955  484633 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:00:25.253022  484633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:00:25.253324  484633 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:00:25.253840  484633 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 08:00:25.253910  484633 config.go:182] Loaded profile config "old-k8s-version-356986": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 08:00:25.253928  484633 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-356986"
	I1002 08:00:25.253951  484633 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-356986"
	W1002 08:00:25.253958  484633 addons.go:247] addon storage-provisioner should already be in state true
	I1002 08:00:25.253975  484633 addons.go:69] Setting dashboard=true in profile "old-k8s-version-356986"
	I1002 08:00:25.253983  484633 host.go:66] Checking if "old-k8s-version-356986" exists ...
	I1002 08:00:25.253986  484633 addons.go:238] Setting addon dashboard=true in "old-k8s-version-356986"
	W1002 08:00:25.253992  484633 addons.go:247] addon dashboard should already be in state true
	I1002 08:00:25.254010  484633 host.go:66] Checking if "old-k8s-version-356986" exists ...
	I1002 08:00:25.254453  484633 cli_runner.go:164] Run: docker container inspect old-k8s-version-356986 --format={{.State.Status}}
	I1002 08:00:25.254665  484633 cli_runner.go:164] Run: docker container inspect old-k8s-version-356986 --format={{.State.Status}}
	I1002 08:00:25.255060  484633 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-356986"
	I1002 08:00:25.255094  484633 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-356986"
	I1002 08:00:25.255401  484633 cli_runner.go:164] Run: docker container inspect old-k8s-version-356986 --format={{.State.Status}}
	I1002 08:00:25.260689  484633 out.go:179] * Verifying Kubernetes components...
	I1002 08:00:25.265451  484633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:00:25.298091  484633 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 08:00:25.301091  484633 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 08:00:25.304142  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 08:00:25.304168  484633 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 08:00:25.304238  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:25.307432  484633 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 08:00:25.310454  484633 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:00:25.310477  484633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 08:00:25.310547  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:25.323623  484633 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-356986"
	W1002 08:00:25.323648  484633 addons.go:247] addon default-storageclass should already be in state true
	I1002 08:00:25.323672  484633 host.go:66] Checking if "old-k8s-version-356986" exists ...
	I1002 08:00:25.324076  484633 cli_runner.go:164] Run: docker container inspect old-k8s-version-356986 --format={{.State.Status}}
	I1002 08:00:25.382478  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:25.384330  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:25.400790  484633 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 08:00:25.400819  484633 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 08:00:25.400883  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:25.436779  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:25.645594  484633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:00:25.681698  484633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:00:25.688570  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 08:00:25.688641  484633 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 08:00:25.692513  484633 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-356986" to be "Ready" ...
	I1002 08:00:25.704560  484633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:00:25.760046  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 08:00:25.760125  484633 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 08:00:25.832733  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 08:00:25.832816  484633 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 08:00:25.906683  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 08:00:25.906755  484633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 08:00:25.986770  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 08:00:25.986858  484633 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 08:00:26.033913  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 08:00:26.033995  484633 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 08:00:26.058723  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 08:00:26.058796  484633 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 08:00:26.081846  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 08:00:26.081923  484633 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 08:00:26.109406  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 08:00:26.109481  484633 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 08:00:26.129593  484633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 08:00:29.917750  484633 node_ready.go:49] node "old-k8s-version-356986" is "Ready"
	I1002 08:00:29.917778  484633 node_ready.go:38] duration metric: took 4.225186941s for node "old-k8s-version-356986" to be "Ready" ...
	I1002 08:00:29.917792  484633 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:00:29.917851  484633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:00:31.320775  484633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.639043908s)
	I1002 08:00:31.914982  484633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.210337692s)
	I1002 08:00:32.474849  484633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.345164725s)
	I1002 08:00:32.474884  484633 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.557016421s)
	I1002 08:00:32.475057  484633 api_server.go:72] duration metric: took 7.221692172s to wait for apiserver process to appear ...
	I1002 08:00:32.475075  484633 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:00:32.475120  484633 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 08:00:32.478029  484633 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-356986 addons enable metrics-server
	
	I1002 08:00:32.481255  484633 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1002 08:00:32.484363  484633 addons.go:514] duration metric: took 7.230505931s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
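The addon summary above can be cross-checked from the host: `minikube addons list` reports per-profile status, and the dashboard objects applied a few lines earlier should show up in the kubernetes-dashboard namespace (the kubectl context name is assumed to equal the profile name, which is minikube's default behaviour):

    minikube -p old-k8s-version-356986 addons list | grep enabled
    kubectl --context old-k8s-version-356986 -n kubernetes-dashboard get deploy,svc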
	I1002 08:00:32.486453  484633 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 08:00:32.487979  484633 api_server.go:141] control plane version: v1.28.0
	I1002 08:00:32.488008  484633 api_server.go:131] duration metric: took 12.926666ms to wait for apiserver health ...
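The healthz probe above goes straight at https://192.168.76.2:8443/healthz. Reproducing it with a bare curl relies on the cluster keeping the default anonymous-auth plus system:public-info-viewer RBAC (an assumption; the log only shows the 200 response), whereas `kubectl get --raw` uses the kubeconfig credentials:

    curl -k https://192.168.76.2:8443/healthz                       # expect: ok
    kubectl --context old-k8s-version-356986 get --raw /healthz     # same endpoint, authenticated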
	I1002 08:00:32.488017  484633 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:00:32.492015  484633 system_pods.go:59] 8 kube-system pods found
	I1002 08:00:32.492057  484633 system_pods.go:61] "coredns-5dd5756b68-rcxgd" [c8338f85-9518-4ede-a9a8-5d7d2a31770b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:00:32.492067  484633 system_pods.go:61] "etcd-old-k8s-version-356986" [bad0e706-ed06-4e2a-9a91-82856527678b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:00:32.492074  484633 system_pods.go:61] "kindnet-h7blk" [dd6f4e26-b3d0-4f9d-9a24-82a9be803571] Running
	I1002 08:00:32.492081  484633 system_pods.go:61] "kube-apiserver-old-k8s-version-356986" [a4eb4668-257f-4b8a-81f0-fea9498de0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:00:32.492093  484633 system_pods.go:61] "kube-controller-manager-old-k8s-version-356986" [e9f64851-c35d-476e-a811-48b81ca12eb7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:00:32.492101  484633 system_pods.go:61] "kube-proxy-8ds6v" [59331def-12d1-49a1-9948-c559d336e730] Running
	I1002 08:00:32.492111  484633 system_pods.go:61] "kube-scheduler-old-k8s-version-356986" [354d6bc9-e27c-47a1-b6de-dd7688681e60] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:00:32.492121  484633 system_pods.go:61] "storage-provisioner" [e762d10a-80a8-4e4b-8b16-08e5f6fd1012] Running
	I1002 08:00:32.492129  484633 system_pods.go:74] duration metric: took 4.106113ms to wait for pod list to return data ...
	I1002 08:00:32.492137  484633 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:00:32.494650  484633 default_sa.go:45] found service account: "default"
	I1002 08:00:32.494674  484633 default_sa.go:55] duration metric: took 2.527383ms for default service account to be created ...
	I1002 08:00:32.494684  484633 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 08:00:32.498192  484633 system_pods.go:86] 8 kube-system pods found
	I1002 08:00:32.498224  484633 system_pods.go:89] "coredns-5dd5756b68-rcxgd" [c8338f85-9518-4ede-a9a8-5d7d2a31770b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:00:32.498234  484633 system_pods.go:89] "etcd-old-k8s-version-356986" [bad0e706-ed06-4e2a-9a91-82856527678b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:00:32.498263  484633 system_pods.go:89] "kindnet-h7blk" [dd6f4e26-b3d0-4f9d-9a24-82a9be803571] Running
	I1002 08:00:32.498288  484633 system_pods.go:89] "kube-apiserver-old-k8s-version-356986" [a4eb4668-257f-4b8a-81f0-fea9498de0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:00:32.498312  484633 system_pods.go:89] "kube-controller-manager-old-k8s-version-356986" [e9f64851-c35d-476e-a811-48b81ca12eb7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:00:32.498342  484633 system_pods.go:89] "kube-proxy-8ds6v" [59331def-12d1-49a1-9948-c559d336e730] Running
	I1002 08:00:32.498349  484633 system_pods.go:89] "kube-scheduler-old-k8s-version-356986" [354d6bc9-e27c-47a1-b6de-dd7688681e60] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:00:32.498354  484633 system_pods.go:89] "storage-provisioner" [e762d10a-80a8-4e4b-8b16-08e5f6fd1012] Running
	I1002 08:00:32.498369  484633 system_pods.go:126] duration metric: took 3.679287ms to wait for k8s-apps to be running ...
	I1002 08:00:32.498378  484633 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 08:00:32.498459  484633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:00:32.512111  484633 system_svc.go:56] duration metric: took 13.723999ms WaitForService to wait for kubelet
	I1002 08:00:32.512178  484633 kubeadm.go:586] duration metric: took 7.258811995s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:00:32.512216  484633 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:00:32.516055  484633 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:00:32.516132  484633 node_conditions.go:123] node cpu capacity is 2
	I1002 08:00:32.516161  484633 node_conditions.go:105] duration metric: took 3.924746ms to run NodePressure ...
	I1002 08:00:32.516187  484633 start.go:241] waiting for startup goroutines ...
	I1002 08:00:32.516209  484633 start.go:246] waiting for cluster config update ...
	I1002 08:00:32.516235  484633 start.go:255] writing updated cluster config ...
	I1002 08:00:32.516546  484633 ssh_runner.go:195] Run: rm -f paused
	I1002 08:00:32.520533  484633 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:00:32.530585  484633 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-rcxgd" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 08:00:34.539286  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:37.038141  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:39.537244  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:42.037051  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:44.037585  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:46.038131  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:48.537045  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:50.538673  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:53.036528  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:55.040365  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:57.536104  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:59.537441  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:01:02.037954  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	I1002 08:01:04.037515  484633 pod_ready.go:94] pod "coredns-5dd5756b68-rcxgd" is "Ready"
	I1002 08:01:04.037545  484633 pod_ready.go:86] duration metric: took 31.506933164s for pod "coredns-5dd5756b68-rcxgd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.040896  484633 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.046428  484633 pod_ready.go:94] pod "etcd-old-k8s-version-356986" is "Ready"
	I1002 08:01:04.046453  484633 pod_ready.go:86] duration metric: took 5.526147ms for pod "etcd-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.050059  484633 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.055953  484633 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-356986" is "Ready"
	I1002 08:01:04.055985  484633 pod_ready.go:86] duration metric: took 5.895414ms for pod "kube-apiserver-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.059489  484633 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.235466  484633 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-356986" is "Ready"
	I1002 08:01:04.235502  484633 pod_ready.go:86] duration metric: took 175.982341ms for pod "kube-controller-manager-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.435443  484633 pod_ready.go:83] waiting for pod "kube-proxy-8ds6v" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.835076  484633 pod_ready.go:94] pod "kube-proxy-8ds6v" is "Ready"
	I1002 08:01:04.835162  484633 pod_ready.go:86] duration metric: took 399.693988ms for pod "kube-proxy-8ds6v" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:05.035229  484633 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:05.434315  484633 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-356986" is "Ready"
	I1002 08:01:05.434346  484633 pod_ready.go:86] duration metric: took 399.087838ms for pod "kube-scheduler-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:05.434359  484633 pod_ready.go:40] duration metric: took 32.913759246s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
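The per-label readiness poll above is roughly what `kubectl wait` expresses in one command; a hedged equivalent run from the host against the same cluster:

    # wait for CoreDNS the same way the log does, then list the kube-system pods it was watching
    kubectl --context old-k8s-version-356986 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl --context old-k8s-version-356986 -n kube-system get pods -o wide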
	I1002 08:01:05.497112  484633 start.go:623] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1002 08:01:05.502520  484633 out.go:203] 
	W1002 08:01:05.505679  484633 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1002 08:01:05.508569  484633 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1002 08:01:05.511768  484633 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-356986" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.485240797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.497963219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.498783797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.516906031Z" level=info msg="Created container 8437b32d980c2a539b85229d47d8a4aa08bd4f891dc1e38c482942da633bb52a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84/dashboard-metrics-scraper" id=1f96b1cc-3648-4504-87fc-ae96b3b1a9a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.523831969Z" level=info msg="Starting container: 8437b32d980c2a539b85229d47d8a4aa08bd4f891dc1e38c482942da633bb52a" id=fb54ffc2-35a7-41ba-856f-3c8cd7696af5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.526175679Z" level=info msg="Started container" PID=1644 containerID=8437b32d980c2a539b85229d47d8a4aa08bd4f891dc1e38c482942da633bb52a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84/dashboard-metrics-scraper id=fb54ffc2-35a7-41ba-856f-3c8cd7696af5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc7b4deef45e2743aaaf404f8d6c85d8321d14978e84da195a3645af86365bc1
	Oct 02 08:01:05 old-k8s-version-356986 conmon[1642]: conmon 8437b32d980c2a539b85 <ninfo>: container 1644 exited with status 1
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.686520961Z" level=info msg="Removing container: 72b74dbe03dc174c92f30d47ae36279190aac15d19eed864fb226d5627c99ecb" id=8eaa11e4-6180-4bc3-b9f1-9c13a38e060a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.696549533Z" level=info msg="Error loading conmon cgroup of container 72b74dbe03dc174c92f30d47ae36279190aac15d19eed864fb226d5627c99ecb: cgroup deleted" id=8eaa11e4-6180-4bc3-b9f1-9c13a38e060a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.700465836Z" level=info msg="Removed container 72b74dbe03dc174c92f30d47ae36279190aac15d19eed864fb226d5627c99ecb: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84/dashboard-metrics-scraper" id=8eaa11e4-6180-4bc3-b9f1-9c13a38e060a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.501361696Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.505663824Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.505702142Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.505722081Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.509069475Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.509102789Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.509123736Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.51242684Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.512460473Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.512483406Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.516109384Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.516162242Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.516198731Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.520349489Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.52039488Z" level=info msg="Updated default CNI network name to kindnet"
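The CNI monitoring events above show kindnet updating its config atomically: it writes 10-kindnet.conflist.temp and then renames it over 10-kindnet.conflist, and CRI-O re-reads the directory and keeps kindnet as the default network each time. The resulting files can be inspected directly on the node (paths taken from these messages):

    minikube -p old-k8s-version-356986 ssh -- sudo ls /etc/cni/net.d
    minikube -p old-k8s-version-356986 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist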
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8437b32d980c2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   cc7b4deef45e2       dashboard-metrics-scraper-5f989dc9cf-srr84       kubernetes-dashboard
	c8e11fae143d5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   000fe98d7ab04       storage-provisioner                              kube-system
	5d1c0fb229e1f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   28 seconds ago      Running             kubernetes-dashboard        0                   4ed7e9d57441f       kubernetes-dashboard-8694d4445c-45gx5            kubernetes-dashboard
	feb1bd1c279a6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   889da50654c89       busybox                                          default
	eca1bbbe0fa0e       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           49 seconds ago      Running             coredns                     1                   60c23d3b87fdd       coredns-5dd5756b68-rcxgd                         kube-system
	c00032d7e435e       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           49 seconds ago      Running             kube-proxy                  1                   d08422367d459       kube-proxy-8ds6v                                 kube-system
	f3fbaee89da23       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   3da3ce91a6b59       kindnet-h7blk                                    kube-system
	836f4317c979e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   000fe98d7ab04       storage-provisioner                              kube-system
	b7fa366bdeb13       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           55 seconds ago      Running             kube-apiserver              1                   ac326571f2e99       kube-apiserver-old-k8s-version-356986            kube-system
	6dff7f35e35a4       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           55 seconds ago      Running             kube-scheduler              1                   eaf5bfbbfcd3f       kube-scheduler-old-k8s-version-356986            kube-system
	b30176313b502       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           55 seconds ago      Running             etcd                        1                   3632c2b19ce62       etcd-old-k8s-version-356986                      kube-system
	b88d2bd387df7       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           55 seconds ago      Running             kube-controller-manager     1                   0e1ac81fc28af       kube-controller-manager-old-k8s-version-356986   kube-system
	
	
	==> coredns [eca1bbbe0fa0e2cbf83d0a6ec4dfa7da3823783de1873ea5b0b9c60ab6006bca] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58188 - 10865 "HINFO IN 6652346976180594339.3661637428453201384. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02301361s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-356986
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-356986
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=old-k8s-version-356986
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_59_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:59:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-356986
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:01:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:01:00 +0000   Thu, 02 Oct 2025 07:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:01:00 +0000   Thu, 02 Oct 2025 07:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:01:00 +0000   Thu, 02 Oct 2025 07:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 08:01:00 +0000   Thu, 02 Oct 2025 07:59:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-356986
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 76937de9eada46c08a09b682a889c05f
	  System UUID:                35f9767f-9ab2-47f0-8d89-175f1127470c
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-rcxgd                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     106s
	  kube-system                 etcd-old-k8s-version-356986                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-h7blk                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-356986             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-356986    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-8ds6v                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-356986             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-srr84        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-45gx5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-356986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node old-k8s-version-356986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           108s                 node-controller  Node old-k8s-version-356986 event: Registered Node old-k8s-version-356986 in Controller
	  Normal  NodeReady                92s                  kubelet          Node old-k8s-version-356986 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node old-k8s-version-356986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                  node-controller  Node old-k8s-version-356986 event: Registered Node old-k8s-version-356986 in Controller
	
	
	==> dmesg <==
	[Oct 2 07:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:30] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:31] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:33] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:00] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b30176313b502e961dc11a216d8f484035b3f0c1657ac76eacce6f3e3eb40e68] <==
	{"level":"info","ts":"2025-10-02T08:00:25.368328Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T08:00:25.368363Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T08:00:25.370386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-02T08:00:25.370775Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-02T08:00:25.380851Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T08:00:25.380976Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T08:00:25.406832Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-02T08:00:25.418447Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T08:00:25.421754Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T08:00:25.418974Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-02T08:00:25.41901Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-02T08:00:26.683141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-02T08:00:26.683251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-02T08:00:26.683303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-02T08:00:26.683356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-02T08:00:26.683395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-02T08:00:26.683437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-02T08:00:26.683469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-02T08:00:26.685065Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-356986 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-02T08:00:26.685141Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T08:00:26.686121Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-02T08:00:26.691331Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T08:00:26.692346Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-02T08:00:26.69295Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-02T08:00:26.693007Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 08:01:20 up  2:43,  0 user,  load average: 1.53, 1.31, 1.54
	Linux old-k8s-version-356986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f3fbaee89da23074470de0cc3ebaf94c5dbfafef85f926825eb744fa22178c11] <==
	I1002 08:00:31.305917       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 08:00:31.320657       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 08:00:31.320804       1 main.go:148] setting mtu 1500 for CNI 
	I1002 08:00:31.320817       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 08:00:31.320833       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T08:00:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 08:00:31.500990       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 08:00:31.501006       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 08:00:31.501014       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 08:00:31.501313       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 08:01:01.501211       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 08:01:01.501218       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 08:01:01.501334       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 08:01:01.502662       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1002 08:01:02.701458       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 08:01:02.701492       1 metrics.go:72] Registering metrics
	I1002 08:01:02.701558       1 controller.go:711] "Syncing nftables rules"
	I1002 08:01:11.500969       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:01:11.501035       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b7fa366bdeb131010efd7f4bbce1b448a27310eefcbf896ea00434f576624347] <==
	I1002 08:00:29.935925       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1002 08:00:29.974126       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 08:00:30.012152       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 08:00:30.012185       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 08:00:30.012322       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 08:00:30.060067       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 08:00:30.060163       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 08:00:30.061927       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1002 08:00:30.062852       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1002 08:00:30.063018       1 aggregator.go:166] initial CRD sync complete...
	I1002 08:00:30.063033       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 08:00:30.063040       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 08:00:30.063048       1 cache.go:39] Caches are synced for autoregister controller
	E1002 08:00:30.155449       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 08:00:30.703901       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:00:32.289596       1 controller.go:624] quota admission added evaluator for: namespaces
	I1002 08:00:32.337613       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 08:00:32.363804       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:00:32.376416       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:00:32.388381       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 08:00:32.447640       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.248.165"}
	I1002 08:00:32.466492       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.125.73"}
	I1002 08:00:42.952759       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1002 08:00:42.956348       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 08:00:43.083774       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b88d2bd387df7b19f12ce6afdec4d533ff093f693444fa7a3a00b64ce367911e] <==
	I1002 08:00:43.030408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.126µs"
	I1002 08:00:43.041494       1 shared_informer.go:318] Caches are synced for daemon sets
	I1002 08:00:43.044040       1 shared_informer.go:318] Caches are synced for persistent volume
	I1002 08:00:43.045615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="20.243755ms"
	I1002 08:00:43.047578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.61µs"
	I1002 08:00:43.049589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.815µs"
	I1002 08:00:43.052165       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 08:00:43.057788       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1002 08:00:43.074522       1 shared_informer.go:318] Caches are synced for cronjob
	I1002 08:00:43.080546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.12µs"
	I1002 08:00:43.080992       1 shared_informer.go:318] Caches are synced for job
	I1002 08:00:43.105743       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 08:00:43.107771       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1002 08:00:43.481761       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 08:00:43.495603       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 08:00:43.495653       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1002 08:00:48.633077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.94µs"
	I1002 08:00:49.642328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.842µs"
	I1002 08:00:52.659616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.538608ms"
	I1002 08:00:52.660287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="60.784µs"
	I1002 08:00:53.325546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.388µs"
	I1002 08:01:03.780457       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.92117ms"
	I1002 08:01:03.781311       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.929µs"
	I1002 08:01:05.696426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.055µs"
	I1002 08:01:13.329665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.256µs"
	
	
	==> kube-proxy [c00032d7e435ee7c15d9510c5e137e5ba35b440362a62a7350120efff8c5da6a] <==
	I1002 08:00:31.526123       1 server_others.go:69] "Using iptables proxy"
	I1002 08:00:31.572610       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1002 08:00:31.776079       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 08:00:31.777975       1 server_others.go:152] "Using iptables Proxier"
	I1002 08:00:31.778008       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 08:00:31.778014       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 08:00:31.778049       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 08:00:31.778274       1 server.go:846] "Version info" version="v1.28.0"
	I1002 08:00:31.778286       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:00:31.783922       1 config.go:188] "Starting service config controller"
	I1002 08:00:31.783950       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 08:00:31.783968       1 config.go:97] "Starting endpoint slice config controller"
	I1002 08:00:31.783973       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 08:00:31.784789       1 config.go:315] "Starting node config controller"
	I1002 08:00:31.784804       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 08:00:31.884081       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 08:00:31.884155       1 shared_informer.go:318] Caches are synced for service config
	I1002 08:00:31.885597       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6dff7f35e35a464a7d11113c050955b61777a001b9cfa9a977dce6c341d60982] <==
	I1002 08:00:28.732095       1 serving.go:348] Generated self-signed cert in-memory
	I1002 08:00:31.310843       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1002 08:00:31.310870       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:00:31.334532       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1002 08:00:31.334560       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1002 08:00:31.334613       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:00:31.334621       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 08:00:31.334632       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:00:31.334638       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 08:00:31.336297       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 08:00:31.336345       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 08:00:31.436002       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 08:00:31.436070       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 08:00:31.443354       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Oct 02 08:00:43 old-k8s-version-356986 kubelet[774]: I1002 08:00:43.009335     774 topology_manager.go:215] "Topology Admit Handler" podUID="6c80228d-9d1c-4fce-8dd7-201ddba480bc" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-srr84"
	Oct 02 08:00:43 old-k8s-version-356986 kubelet[774]: I1002 08:00:43.019562     774 topology_manager.go:215] "Topology Admit Handler" podUID="b3d3d617-491d-4ea5-b0cd-fbc9bfb09ba1" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-45gx5"
	Oct 02 08:00:43 old-k8s-version-356986 kubelet[774]: I1002 08:00:43.020456     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6c80228d-9d1c-4fce-8dd7-201ddba480bc-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-srr84\" (UID: \"6c80228d-9d1c-4fce-8dd7-201ddba480bc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84"
	Oct 02 08:00:43 old-k8s-version-356986 kubelet[774]: I1002 08:00:43.020651     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf87s\" (UniqueName: \"kubernetes.io/projected/6c80228d-9d1c-4fce-8dd7-201ddba480bc-kube-api-access-xf87s\") pod \"dashboard-metrics-scraper-5f989dc9cf-srr84\" (UID: \"6c80228d-9d1c-4fce-8dd7-201ddba480bc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84"
	Oct 02 08:00:43 old-k8s-version-356986 kubelet[774]: I1002 08:00:43.121349     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c5cm\" (UniqueName: \"kubernetes.io/projected/b3d3d617-491d-4ea5-b0cd-fbc9bfb09ba1-kube-api-access-7c5cm\") pod \"kubernetes-dashboard-8694d4445c-45gx5\" (UID: \"b3d3d617-491d-4ea5-b0cd-fbc9bfb09ba1\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-45gx5"
	Oct 02 08:00:43 old-k8s-version-356986 kubelet[774]: I1002 08:00:43.121428     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b3d3d617-491d-4ea5-b0cd-fbc9bfb09ba1-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-45gx5\" (UID: \"b3d3d617-491d-4ea5-b0cd-fbc9bfb09ba1\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-45gx5"
	Oct 02 08:00:43 old-k8s-version-356986 kubelet[774]: W1002 08:00:43.353373     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/crio-4ed7e9d57441f14eb6c6f0d67e2f1121142165d79a8000f3f899cc4471652f89 WatchSource:0}: Error finding container 4ed7e9d57441f14eb6c6f0d67e2f1121142165d79a8000f3f899cc4471652f89: Status 404 returned error can't find the container with id 4ed7e9d57441f14eb6c6f0d67e2f1121142165d79a8000f3f899cc4471652f89
	Oct 02 08:00:48 old-k8s-version-356986 kubelet[774]: I1002 08:00:48.619601     774 scope.go:117] "RemoveContainer" containerID="6b1a996922bb6a4697314c5721091fe9aaed52d2250d08d73dbf1abd1ee443b7"
	Oct 02 08:00:49 old-k8s-version-356986 kubelet[774]: I1002 08:00:49.623685     774 scope.go:117] "RemoveContainer" containerID="6b1a996922bb6a4697314c5721091fe9aaed52d2250d08d73dbf1abd1ee443b7"
	Oct 02 08:00:49 old-k8s-version-356986 kubelet[774]: I1002 08:00:49.631186     774 scope.go:117] "RemoveContainer" containerID="72b74dbe03dc174c92f30d47ae36279190aac15d19eed864fb226d5627c99ecb"
	Oct 02 08:00:49 old-k8s-version-356986 kubelet[774]: E1002 08:00:49.631526     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-srr84_kubernetes-dashboard(6c80228d-9d1c-4fce-8dd7-201ddba480bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84" podUID="6c80228d-9d1c-4fce-8dd7-201ddba480bc"
	Oct 02 08:00:53 old-k8s-version-356986 kubelet[774]: I1002 08:00:53.310912     774 scope.go:117] "RemoveContainer" containerID="72b74dbe03dc174c92f30d47ae36279190aac15d19eed864fb226d5627c99ecb"
	Oct 02 08:00:53 old-k8s-version-356986 kubelet[774]: E1002 08:00:53.311745     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-srr84_kubernetes-dashboard(6c80228d-9d1c-4fce-8dd7-201ddba480bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84" podUID="6c80228d-9d1c-4fce-8dd7-201ddba480bc"
	Oct 02 08:00:53 old-k8s-version-356986 kubelet[774]: I1002 08:00:53.325767     774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-45gx5" podStartSLOduration=2.743990626 podCreationTimestamp="2025-10-02 08:00:42 +0000 UTC" firstStartedPulling="2025-10-02 08:00:43.357689709 +0000 UTC m=+19.055989252" lastFinishedPulling="2025-10-02 08:00:51.939381338 +0000 UTC m=+27.637680881" observedRunningTime="2025-10-02 08:00:52.647558236 +0000 UTC m=+28.345857778" watchObservedRunningTime="2025-10-02 08:00:53.325682255 +0000 UTC m=+29.023981797"
	Oct 02 08:01:01 old-k8s-version-356986 kubelet[774]: I1002 08:01:01.657929     774 scope.go:117] "RemoveContainer" containerID="836f4317c979eef6a650d578749a260f6ed5e3f31c262b3b74c2a01df2ed13aa"
	Oct 02 08:01:05 old-k8s-version-356986 kubelet[774]: I1002 08:01:05.481749     774 scope.go:117] "RemoveContainer" containerID="72b74dbe03dc174c92f30d47ae36279190aac15d19eed864fb226d5627c99ecb"
	Oct 02 08:01:05 old-k8s-version-356986 kubelet[774]: I1002 08:01:05.672156     774 scope.go:117] "RemoveContainer" containerID="72b74dbe03dc174c92f30d47ae36279190aac15d19eed864fb226d5627c99ecb"
	Oct 02 08:01:05 old-k8s-version-356986 kubelet[774]: I1002 08:01:05.676028     774 scope.go:117] "RemoveContainer" containerID="8437b32d980c2a539b85229d47d8a4aa08bd4f891dc1e38c482942da633bb52a"
	Oct 02 08:01:05 old-k8s-version-356986 kubelet[774]: E1002 08:01:05.676465     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-srr84_kubernetes-dashboard(6c80228d-9d1c-4fce-8dd7-201ddba480bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84" podUID="6c80228d-9d1c-4fce-8dd7-201ddba480bc"
	Oct 02 08:01:13 old-k8s-version-356986 kubelet[774]: I1002 08:01:13.310616     774 scope.go:117] "RemoveContainer" containerID="8437b32d980c2a539b85229d47d8a4aa08bd4f891dc1e38c482942da633bb52a"
	Oct 02 08:01:13 old-k8s-version-356986 kubelet[774]: E1002 08:01:13.311693     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-srr84_kubernetes-dashboard(6c80228d-9d1c-4fce-8dd7-201ddba480bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84" podUID="6c80228d-9d1c-4fce-8dd7-201ddba480bc"
	Oct 02 08:01:17 old-k8s-version-356986 kubelet[774]: I1002 08:01:17.780053     774 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 02 08:01:17 old-k8s-version-356986 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 08:01:17 old-k8s-version-356986 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 08:01:17 old-k8s-version-356986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [5d1c0fb229e1f4de7c11313a7f39ff0ac8cf227dfac5475133dcc5e3386b24f2] <==
	2025/10/02 08:00:51 Using namespace: kubernetes-dashboard
	2025/10/02 08:00:51 Using in-cluster config to connect to apiserver
	2025/10/02 08:00:51 Using secret token for csrf signing
	2025/10/02 08:00:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 08:00:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 08:00:52 Successful initial request to the apiserver, version: v1.28.0
	2025/10/02 08:00:52 Generating JWE encryption key
	2025/10/02 08:00:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 08:00:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 08:00:52 Initializing JWE encryption key from synchronized object
	2025/10/02 08:00:52 Creating in-cluster Sidecar client
	2025/10/02 08:00:52 Serving insecurely on HTTP port: 9090
	2025/10/02 08:00:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 08:00:51 Starting overwatch
	
	
	==> storage-provisioner [836f4317c979eef6a650d578749a260f6ed5e3f31c262b3b74c2a01df2ed13aa] <==
	I1002 08:00:31.202806       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 08:01:01.204731       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c8e11fae143d5af223ee8cd93022f50e9979e42cab3a78166ca1dc1c9138f36b] <==
	I1002 08:01:01.714534       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 08:01:01.773805       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 08:01:01.774000       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 08:01:19.172151       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 08:01:19.172333       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-356986_2643492d-532f-4d73-b187-a74880d73580!
	I1002 08:01:19.173169       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed2987ef-dd9a-4a01-9087-8248b6747c96", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-356986_2643492d-532f-4d73-b187-a74880d73580 became leader
	I1002 08:01:19.272794       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-356986_2643492d-532f-4d73-b187-a74880d73580!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-356986 -n old-k8s-version-356986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-356986 -n old-k8s-version-356986: exit status 2 (362.28705ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-356986 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-356986
helpers_test.go:243: (dbg) docker inspect old-k8s-version-356986:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85",
	        "Created": "2025-10-02T07:58:57.889195486Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484758,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T08:00:17.440741185Z",
	            "FinishedAt": "2025-10-02T08:00:16.60779997Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/hosts",
	        "LogPath": "/var/lib/docker/containers/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85-json.log",
	        "Name": "/old-k8s-version-356986",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-356986:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-356986",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85",
	                "LowerDir": "/var/lib/docker/overlay2/6c3b3bba6f66fa03557331843b3a41aae7c62de28d54a4747da93c2d11a0b8e7-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c3b3bba6f66fa03557331843b3a41aae7c62de28d54a4747da93c2d11a0b8e7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c3b3bba6f66fa03557331843b3a41aae7c62de28d54a4747da93c2d11a0b8e7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c3b3bba6f66fa03557331843b3a41aae7c62de28d54a4747da93c2d11a0b8e7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-356986",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-356986/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-356986",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-356986",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-356986",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e3cc75bdbb0ddaa3d7d545fc5415088a7ced459e4cd0c0f7ae547d0de062ef15",
	            "SandboxKey": "/var/run/docker/netns/e3cc75bdbb0d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-356986": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:55:25:d6:36:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c6148cfa20f53b0003f798fe96a07d1b1fb1d274fc1a1b8a6f3f1e34c962a644",
	                    "EndpointID": "e3895e7bdf593e0aefdc325bba199d198bdf0aa134ab4e04aaf5fe7cc8b5cbf6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-356986",
	                        "3e0fd1abc9e1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-356986 -n old-k8s-version-356986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-356986 -n old-k8s-version-356986: exit status 2 (337.53063ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-356986 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-356986 logs -n 25: (1.28079258s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-810803 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo containerd config dump                                                                                                                                                                                                  │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ -p cilium-810803 sudo crio config                                                                                                                                                                                                             │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ delete  │ -p cilium-810803                                                                                                                                                                                                                              │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │ 02 Oct 25 07:49 UTC │
	│ start   │ -p force-systemd-env-297062 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-297062  │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ force-systemd-flag-275910 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-275910 │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ delete  │ -p force-systemd-flag-275910                                                                                                                                                                                                                  │ force-systemd-flag-275910 │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ start   │ -p cert-expiration-759246 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-759246    │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ delete  │ -p force-systemd-env-297062                                                                                                                                                                                                                   │ force-systemd-env-297062  │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ start   │ -p cert-options-654417 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ ssh     │ cert-options-654417 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ ssh     │ -p cert-options-654417 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ delete  │ -p cert-options-654417                                                                                                                                                                                                                        │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:59 UTC │
	│ start   │ -p cert-expiration-759246 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-759246    │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-356986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │                     │
	│ stop    │ -p old-k8s-version-356986 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:00 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-356986 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:00 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:01 UTC │
	│ image   │ old-k8s-version-356986 image list --format=json                                                                                                                                                                                               │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ pause   │ -p old-k8s-version-356986 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:00:17
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:00:17.142249  484633 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:00:17.142372  484633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:00:17.142383  484633 out.go:374] Setting ErrFile to fd 2...
	I1002 08:00:17.142388  484633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:00:17.142640  484633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:00:17.143026  484633 out.go:368] Setting JSON to false
	I1002 08:00:17.143992  484633 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9769,"bootTime":1759382249,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 08:00:17.144063  484633 start.go:140] virtualization:  
	I1002 08:00:17.147197  484633 out.go:179] * [old-k8s-version-356986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:00:17.151177  484633 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:00:17.151227  484633 notify.go:220] Checking for updates...
	I1002 08:00:17.157182  484633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:00:17.160104  484633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:00:17.163195  484633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 08:00:17.166277  484633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:00:17.169338  484633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:00:17.173195  484633 config.go:182] Loaded profile config "old-k8s-version-356986": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 08:00:17.176633  484633 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1002 08:00:17.179490  484633 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:00:17.212740  484633 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:00:17.212874  484633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:00:17.281320  484633 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 08:00:17.271821189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:00:17.281433  484633 docker.go:318] overlay module found
	I1002 08:00:17.284564  484633 out.go:179] * Using the docker driver based on existing profile
	I1002 08:00:17.287315  484633 start.go:304] selected driver: docker
	I1002 08:00:17.287334  484633 start.go:924] validating driver "docker" against &{Name:old-k8s-version-356986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-356986 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:00:17.287436  484633 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:00:17.288152  484633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:00:17.346437  484633 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 08:00:17.336613339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:00:17.346809  484633 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:00:17.346842  484633 cni.go:84] Creating CNI manager for ""
	I1002 08:00:17.346908  484633 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:00:17.346957  484633 start.go:348] cluster config:
	{Name:old-k8s-version-356986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-356986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:00:17.350380  484633 out.go:179] * Starting "old-k8s-version-356986" primary control-plane node in "old-k8s-version-356986" cluster
	I1002 08:00:17.353369  484633 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 08:00:17.356232  484633 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 08:00:17.359144  484633 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 08:00:17.359223  484633 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1002 08:00:17.359262  484633 cache.go:58] Caching tarball of preloaded images
	I1002 08:00:17.359355  484633 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 08:00:17.359364  484633 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1002 08:00:17.359479  484633 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/config.json ...
	I1002 08:00:17.359706  484633 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 08:00:17.385104  484633 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 08:00:17.385130  484633 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 08:00:17.385159  484633 cache.go:232] Successfully downloaded all kic artifacts
	I1002 08:00:17.385184  484633 start.go:360] acquireMachinesLock for old-k8s-version-356986: {Name:mkbbae297721a7ebacae3a5cc68410b50b3203b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:00:17.385252  484633 start.go:364] duration metric: took 44.669µs to acquireMachinesLock for "old-k8s-version-356986"
	I1002 08:00:17.385274  484633 start.go:96] Skipping create...Using existing machine configuration
	I1002 08:00:17.385285  484633 fix.go:54] fixHost starting: 
	I1002 08:00:17.385550  484633 cli_runner.go:164] Run: docker container inspect old-k8s-version-356986 --format={{.State.Status}}
	I1002 08:00:17.403357  484633 fix.go:112] recreateIfNeeded on old-k8s-version-356986: state=Stopped err=<nil>
	W1002 08:00:17.403393  484633 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 08:00:17.406641  484633 out.go:252] * Restarting existing docker container for "old-k8s-version-356986" ...
	I1002 08:00:17.406725  484633 cli_runner.go:164] Run: docker start old-k8s-version-356986
	I1002 08:00:17.674419  484633 cli_runner.go:164] Run: docker container inspect old-k8s-version-356986 --format={{.State.Status}}
	I1002 08:00:17.700858  484633 kic.go:430] container "old-k8s-version-356986" state is running.
	I1002 08:00:17.701249  484633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-356986
	I1002 08:00:17.724694  484633 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/config.json ...
	I1002 08:00:17.724922  484633 machine.go:93] provisionDockerMachine start ...
	I1002 08:00:17.724984  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:17.746726  484633 main.go:141] libmachine: Using SSH client type: native
	I1002 08:00:17.747059  484633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1002 08:00:17.747068  484633 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 08:00:17.748084  484633 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 08:00:20.878908  484633 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-356986
	
	I1002 08:00:20.878935  484633 ubuntu.go:182] provisioning hostname "old-k8s-version-356986"
	I1002 08:00:20.879005  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:20.896888  484633 main.go:141] libmachine: Using SSH client type: native
	I1002 08:00:20.897204  484633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1002 08:00:20.897221  484633 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-356986 && echo "old-k8s-version-356986" | sudo tee /etc/hostname
	I1002 08:00:21.042024  484633 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-356986
	
	I1002 08:00:21.042130  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:21.059454  484633 main.go:141] libmachine: Using SSH client type: native
	I1002 08:00:21.059771  484633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1002 08:00:21.059795  484633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-356986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-356986/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-356986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 08:00:21.191356  484633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
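	The hostname and /etc/hosts rewrite above can be spot-checked from the host; a minimal sketch, assuming shell access to the same docker daemon and the container name used throughout this log:
	
	  docker exec old-k8s-version-356986 hostname
	  docker exec old-k8s-version-356986 grep old-k8s-version-356986 /etc/hosts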
	I1002 08:00:21.191384  484633 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 08:00:21.191417  484633 ubuntu.go:190] setting up certificates
	I1002 08:00:21.191427  484633 provision.go:84] configureAuth start
	I1002 08:00:21.191494  484633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-356986
	I1002 08:00:21.208727  484633 provision.go:143] copyHostCerts
	I1002 08:00:21.208799  484633 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 08:00:21.208821  484633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 08:00:21.208898  484633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 08:00:21.209012  484633 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 08:00:21.209024  484633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 08:00:21.209054  484633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 08:00:21.209120  484633 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 08:00:21.209127  484633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 08:00:21.209153  484633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 08:00:21.209216  484633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-356986 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-356986]
	I1002 08:00:21.806740  484633 provision.go:177] copyRemoteCerts
	I1002 08:00:21.806818  484633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 08:00:21.806887  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:21.825068  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:21.923203  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 08:00:21.941976  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 08:00:21.960936  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 08:00:21.979839  484633 provision.go:87] duration metric: took 788.393125ms to configureAuth
	I1002 08:00:21.979910  484633 ubuntu.go:206] setting minikube options for container-runtime
	I1002 08:00:21.980128  484633 config.go:182] Loaded profile config "old-k8s-version-356986": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 08:00:21.980240  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:21.997960  484633 main.go:141] libmachine: Using SSH client type: native
	I1002 08:00:21.998382  484633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1002 08:00:21.998409  484633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 08:00:22.302744  484633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 08:00:22.302767  484633 machine.go:96] duration metric: took 4.577836355s to provisionDockerMachine
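	The CRIO_MINIKUBE_OPTIONS drop-in written a few lines above lands in /etc/sysconfig/crio.minikube, and crio is restarted to pick it up; a minimal sketch of verifying both by hand, assuming the same container and path:
	
	  docker exec old-k8s-version-356986 cat /etc/sysconfig/crio.minikube
	  docker exec old-k8s-version-356986 systemctl is-active crio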
	I1002 08:00:22.302778  484633 start.go:293] postStartSetup for "old-k8s-version-356986" (driver="docker")
	I1002 08:00:22.302788  484633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 08:00:22.302859  484633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 08:00:22.302898  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:22.327270  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:22.423262  484633 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 08:00:22.426837  484633 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 08:00:22.426906  484633 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 08:00:22.426922  484633 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 08:00:22.426981  484633 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 08:00:22.427071  484633 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 08:00:22.427215  484633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 08:00:22.435125  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:00:22.452987  484633 start.go:296] duration metric: took 150.191899ms for postStartSetup
	I1002 08:00:22.453081  484633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 08:00:22.453122  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:22.470456  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:22.564474  484633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 08:00:22.569205  484633 fix.go:56] duration metric: took 5.183918099s for fixHost
	I1002 08:00:22.569232  484633 start.go:83] releasing machines lock for "old-k8s-version-356986", held for 5.183970546s
	I1002 08:00:22.569301  484633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-356986
	I1002 08:00:22.585823  484633 ssh_runner.go:195] Run: cat /version.json
	I1002 08:00:22.585887  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:22.586172  484633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 08:00:22.586246  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:22.618102  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:22.618156  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:22.807744  484633 ssh_runner.go:195] Run: systemctl --version
	I1002 08:00:22.814786  484633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 08:00:22.850046  484633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 08:00:22.855202  484633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 08:00:22.855339  484633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 08:00:22.863314  484633 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 08:00:22.863381  484633 start.go:495] detecting cgroup driver to use...
	I1002 08:00:22.863430  484633 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 08:00:22.863500  484633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 08:00:22.879021  484633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 08:00:22.892216  484633 docker.go:218] disabling cri-docker service (if available) ...
	I1002 08:00:22.892582  484633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 08:00:22.911607  484633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 08:00:22.925052  484633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 08:00:23.041298  484633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 08:00:23.177871  484633 docker.go:234] disabling docker service ...
	I1002 08:00:23.178000  484633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 08:00:23.194494  484633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 08:00:23.208921  484633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 08:00:23.336430  484633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 08:00:23.451234  484633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 08:00:23.465361  484633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 08:00:23.480512  484633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 08:00:23.480586  484633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:00:23.490063  484633 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 08:00:23.490158  484633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:00:23.500392  484633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:00:23.509514  484633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:00:23.519278  484633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 08:00:23.528564  484633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:00:23.538077  484633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:00:23.547465  484633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:00:23.557154  484633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 08:00:23.565609  484633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 08:00:23.573578  484633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:00:23.684857  484633 ssh_runner.go:195] Run: sudo systemctl restart crio
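	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before the daemon-reload and restart; a minimal sketch of confirming the resulting values, assuming the same file path:
	
	  docker exec old-k8s-version-356986 grep -E 'pause_image|cgroup_manager|conmon_cgroup|unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf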
	I1002 08:00:23.817126  484633 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 08:00:23.817207  484633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 08:00:23.821533  484633 start.go:563] Will wait 60s for crictl version
	I1002 08:00:23.821600  484633 ssh_runner.go:195] Run: which crictl
	I1002 08:00:23.825192  484633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 08:00:23.850348  484633 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 08:00:23.850442  484633 ssh_runner.go:195] Run: crio --version
	I1002 08:00:23.882081  484633 ssh_runner.go:195] Run: crio --version
	I1002 08:00:23.913041  484633 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1002 08:00:23.915860  484633 cli_runner.go:164] Run: docker network inspect old-k8s-version-356986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:00:23.932102  484633 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 08:00:23.935902  484633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:00:23.945934  484633 kubeadm.go:883] updating cluster {Name:old-k8s-version-356986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-356986 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 08:00:23.946048  484633 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 08:00:23.946111  484633 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:00:23.979339  484633 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:00:23.979367  484633 crio.go:433] Images already preloaded, skipping extraction
	I1002 08:00:23.979424  484633 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:00:24.014607  484633 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:00:24.014636  484633 cache_images.go:85] Images are preloaded, skipping loading
	I1002 08:00:24.014644  484633 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1002 08:00:24.014747  484633 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-356986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-356986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
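	The kubelet unit and ExecStart override shown above are written out to /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines further down); a minimal sketch of inspecting what systemd actually loads once they are in place:
	
	  docker exec old-k8s-version-356986 systemctl cat kubelet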
	I1002 08:00:24.014841  484633 ssh_runner.go:195] Run: crio config
	I1002 08:00:24.092603  484633 cni.go:84] Creating CNI manager for ""
	I1002 08:00:24.092631  484633 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:00:24.092648  484633 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 08:00:24.092672  484633 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-356986 NodeName:old-k8s-version-356986 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 08:00:24.092814  484633 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-356986"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
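	The generated kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new a few lines below, and minikube decides whether the restart needs reconfiguration by diffing it against the active file (see the diff -u run further down); a minimal sketch of doing the same check by hand:
	
	  docker exec old-k8s-version-356986 diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new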
	
	I1002 08:00:24.092890  484633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1002 08:00:24.101012  484633 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 08:00:24.101116  484633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 08:00:24.109499  484633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1002 08:00:24.122813  484633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 08:00:24.136682  484633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1002 08:00:24.150076  484633 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 08:00:24.153935  484633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:00:24.164382  484633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:00:24.279168  484633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:00:24.298142  484633 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986 for IP: 192.168.76.2
	I1002 08:00:24.298166  484633 certs.go:195] generating shared ca certs ...
	I1002 08:00:24.298183  484633 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:00:24.298352  484633 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 08:00:24.298401  484633 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 08:00:24.298414  484633 certs.go:257] generating profile certs ...
	I1002 08:00:24.298505  484633 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.key
	I1002 08:00:24.298557  484633 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/apiserver.key.56ee8b80
	I1002 08:00:24.298597  484633 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/proxy-client.key
	I1002 08:00:24.298717  484633 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 08:00:24.298761  484633 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 08:00:24.298774  484633 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 08:00:24.298800  484633 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 08:00:24.298826  484633 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 08:00:24.298853  484633 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 08:00:24.298898  484633 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:00:24.299514  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 08:00:24.318277  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 08:00:24.344413  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 08:00:24.372411  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 08:00:24.403238  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 08:00:24.439032  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 08:00:24.471701  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 08:00:24.494328  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 08:00:24.521824  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 08:00:24.542796  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 08:00:24.563453  484633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 08:00:24.585290  484633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 08:00:24.600685  484633 ssh_runner.go:195] Run: openssl version
	I1002 08:00:24.607125  484633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 08:00:24.617321  484633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 08:00:24.621133  484633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 08:00:24.621201  484633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 08:00:24.663536  484633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 08:00:24.672043  484633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 08:00:24.680927  484633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:00:24.684897  484633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:00:24.684973  484633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:00:24.726385  484633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 08:00:24.735447  484633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 08:00:24.744260  484633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 08:00:24.748388  484633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 08:00:24.748476  484633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 08:00:24.790138  484633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 08:00:24.798222  484633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 08:00:24.802494  484633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 08:00:24.844528  484633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 08:00:24.887760  484633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 08:00:24.930696  484633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 08:00:24.974122  484633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 08:00:25.020680  484633 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
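	The six -checkend probes above each assert that a certificate will still be valid in 86400 seconds (24 hours); a minimal sketch collapsing them into one loop over the same paths, assuming host-side shell access to the container:
	
	  for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	    docker exec old-k8s-version-356986 openssl x509 -noout -checkend 86400 \
	      -in /var/lib/minikube/certs/$c.crt && echo "$c ok" || echo "$c expiring soon"
	  done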
	I1002 08:00:25.106328  484633 kubeadm.go:400] StartCluster: {Name:old-k8s-version-356986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-356986 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:00:25.106440  484633 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 08:00:25.106619  484633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:00:25.181336  484633 cri.go:89] found id: "b7fa366bdeb131010efd7f4bbce1b448a27310eefcbf896ea00434f576624347"
	I1002 08:00:25.181360  484633 cri.go:89] found id: "6dff7f35e35a464a7d11113c050955b61777a001b9cfa9a977dce6c341d60982"
	I1002 08:00:25.181366  484633 cri.go:89] found id: "b30176313b502e961dc11a216d8f484035b3f0c1657ac76eacce6f3e3eb40e68"
	I1002 08:00:25.181379  484633 cri.go:89] found id: "b88d2bd387df7b19f12ce6afdec4d533ff093f693444fa7a3a00b64ce367911e"
	I1002 08:00:25.181399  484633 cri.go:89] found id: ""
	I1002 08:00:25.181490  484633 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 08:00:25.205460  484633 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:00:25Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:00:25.205565  484633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 08:00:25.221823  484633 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 08:00:25.221879  484633 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 08:00:25.221996  484633 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 08:00:25.232915  484633 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 08:00:25.233620  484633 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-356986" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:00:25.233960  484633 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-292504/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-356986" cluster setting kubeconfig missing "old-k8s-version-356986" context setting]
	I1002 08:00:25.234501  484633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:00:25.236653  484633 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 08:00:25.251767  484633 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 08:00:25.251807  484633 kubeadm.go:601] duration metric: took 29.885508ms to restartPrimaryControlPlane
	I1002 08:00:25.251851  484633 kubeadm.go:402] duration metric: took 145.533723ms to StartCluster
	I1002 08:00:25.251869  484633 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:00:25.251955  484633 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:00:25.253022  484633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:00:25.253324  484633 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:00:25.253840  484633 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 08:00:25.253910  484633 config.go:182] Loaded profile config "old-k8s-version-356986": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 08:00:25.253928  484633 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-356986"
	I1002 08:00:25.253951  484633 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-356986"
	W1002 08:00:25.253958  484633 addons.go:247] addon storage-provisioner should already be in state true
	I1002 08:00:25.253975  484633 addons.go:69] Setting dashboard=true in profile "old-k8s-version-356986"
	I1002 08:00:25.253983  484633 host.go:66] Checking if "old-k8s-version-356986" exists ...
	I1002 08:00:25.253986  484633 addons.go:238] Setting addon dashboard=true in "old-k8s-version-356986"
	W1002 08:00:25.253992  484633 addons.go:247] addon dashboard should already be in state true
	I1002 08:00:25.254010  484633 host.go:66] Checking if "old-k8s-version-356986" exists ...
	I1002 08:00:25.254453  484633 cli_runner.go:164] Run: docker container inspect old-k8s-version-356986 --format={{.State.Status}}
	I1002 08:00:25.254665  484633 cli_runner.go:164] Run: docker container inspect old-k8s-version-356986 --format={{.State.Status}}
	I1002 08:00:25.255060  484633 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-356986"
	I1002 08:00:25.255094  484633 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-356986"
	I1002 08:00:25.255401  484633 cli_runner.go:164] Run: docker container inspect old-k8s-version-356986 --format={{.State.Status}}
	I1002 08:00:25.260689  484633 out.go:179] * Verifying Kubernetes components...
	I1002 08:00:25.265451  484633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:00:25.298091  484633 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 08:00:25.301091  484633 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 08:00:25.304142  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 08:00:25.304168  484633 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 08:00:25.304238  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:25.307432  484633 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 08:00:25.310454  484633 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:00:25.310477  484633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 08:00:25.310547  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:25.323623  484633 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-356986"
	W1002 08:00:25.323648  484633 addons.go:247] addon default-storageclass should already be in state true
	I1002 08:00:25.323672  484633 host.go:66] Checking if "old-k8s-version-356986" exists ...
	I1002 08:00:25.324076  484633 cli_runner.go:164] Run: docker container inspect old-k8s-version-356986 --format={{.State.Status}}
	I1002 08:00:25.382478  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:25.384330  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:25.400790  484633 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 08:00:25.400819  484633 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 08:00:25.400883  484633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-356986
	I1002 08:00:25.436779  484633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/old-k8s-version-356986/id_rsa Username:docker}
	I1002 08:00:25.645594  484633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:00:25.681698  484633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:00:25.688570  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 08:00:25.688641  484633 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 08:00:25.692513  484633 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-356986" to be "Ready" ...
	I1002 08:00:25.704560  484633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:00:25.760046  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 08:00:25.760125  484633 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 08:00:25.832733  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 08:00:25.832816  484633 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 08:00:25.906683  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 08:00:25.906755  484633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 08:00:25.986770  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 08:00:25.986858  484633 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 08:00:26.033913  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 08:00:26.033995  484633 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 08:00:26.058723  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 08:00:26.058796  484633 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 08:00:26.081846  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 08:00:26.081923  484633 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 08:00:26.109406  484633 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 08:00:26.109481  484633 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 08:00:26.129593  484633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 08:00:29.917750  484633 node_ready.go:49] node "old-k8s-version-356986" is "Ready"
	I1002 08:00:29.917778  484633 node_ready.go:38] duration metric: took 4.225186941s for node "old-k8s-version-356986" to be "Ready" ...
	I1002 08:00:29.917792  484633 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:00:29.917851  484633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:00:31.320775  484633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.639043908s)
	I1002 08:00:31.914982  484633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.210337692s)
	I1002 08:00:32.474849  484633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.345164725s)
	I1002 08:00:32.474884  484633 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.557016421s)
	I1002 08:00:32.475057  484633 api_server.go:72] duration metric: took 7.221692172s to wait for apiserver process to appear ...
	I1002 08:00:32.475075  484633 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:00:32.475120  484633 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 08:00:32.478029  484633 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-356986 addons enable metrics-server
	
	I1002 08:00:32.481255  484633 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1002 08:00:32.484363  484633 addons.go:514] duration metric: took 7.230505931s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1002 08:00:32.486453  484633 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 08:00:32.487979  484633 api_server.go:141] control plane version: v1.28.0
	I1002 08:00:32.488008  484633 api_server.go:131] duration metric: took 12.926666ms to wait for apiserver health ...
	I1002 08:00:32.488017  484633 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:00:32.492015  484633 system_pods.go:59] 8 kube-system pods found
	I1002 08:00:32.492057  484633 system_pods.go:61] "coredns-5dd5756b68-rcxgd" [c8338f85-9518-4ede-a9a8-5d7d2a31770b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:00:32.492067  484633 system_pods.go:61] "etcd-old-k8s-version-356986" [bad0e706-ed06-4e2a-9a91-82856527678b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:00:32.492074  484633 system_pods.go:61] "kindnet-h7blk" [dd6f4e26-b3d0-4f9d-9a24-82a9be803571] Running
	I1002 08:00:32.492081  484633 system_pods.go:61] "kube-apiserver-old-k8s-version-356986" [a4eb4668-257f-4b8a-81f0-fea9498de0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:00:32.492093  484633 system_pods.go:61] "kube-controller-manager-old-k8s-version-356986" [e9f64851-c35d-476e-a811-48b81ca12eb7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:00:32.492101  484633 system_pods.go:61] "kube-proxy-8ds6v" [59331def-12d1-49a1-9948-c559d336e730] Running
	I1002 08:00:32.492111  484633 system_pods.go:61] "kube-scheduler-old-k8s-version-356986" [354d6bc9-e27c-47a1-b6de-dd7688681e60] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:00:32.492121  484633 system_pods.go:61] "storage-provisioner" [e762d10a-80a8-4e4b-8b16-08e5f6fd1012] Running
	I1002 08:00:32.492129  484633 system_pods.go:74] duration metric: took 4.106113ms to wait for pod list to return data ...
	I1002 08:00:32.492137  484633 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:00:32.494650  484633 default_sa.go:45] found service account: "default"
	I1002 08:00:32.494674  484633 default_sa.go:55] duration metric: took 2.527383ms for default service account to be created ...
	I1002 08:00:32.494684  484633 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 08:00:32.498192  484633 system_pods.go:86] 8 kube-system pods found
	I1002 08:00:32.498224  484633 system_pods.go:89] "coredns-5dd5756b68-rcxgd" [c8338f85-9518-4ede-a9a8-5d7d2a31770b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:00:32.498234  484633 system_pods.go:89] "etcd-old-k8s-version-356986" [bad0e706-ed06-4e2a-9a91-82856527678b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:00:32.498263  484633 system_pods.go:89] "kindnet-h7blk" [dd6f4e26-b3d0-4f9d-9a24-82a9be803571] Running
	I1002 08:00:32.498288  484633 system_pods.go:89] "kube-apiserver-old-k8s-version-356986" [a4eb4668-257f-4b8a-81f0-fea9498de0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:00:32.498312  484633 system_pods.go:89] "kube-controller-manager-old-k8s-version-356986" [e9f64851-c35d-476e-a811-48b81ca12eb7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:00:32.498342  484633 system_pods.go:89] "kube-proxy-8ds6v" [59331def-12d1-49a1-9948-c559d336e730] Running
	I1002 08:00:32.498349  484633 system_pods.go:89] "kube-scheduler-old-k8s-version-356986" [354d6bc9-e27c-47a1-b6de-dd7688681e60] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:00:32.498354  484633 system_pods.go:89] "storage-provisioner" [e762d10a-80a8-4e4b-8b16-08e5f6fd1012] Running
	I1002 08:00:32.498369  484633 system_pods.go:126] duration metric: took 3.679287ms to wait for k8s-apps to be running ...
	I1002 08:00:32.498378  484633 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 08:00:32.498459  484633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:00:32.512111  484633 system_svc.go:56] duration metric: took 13.723999ms WaitForService to wait for kubelet
	I1002 08:00:32.512178  484633 kubeadm.go:586] duration metric: took 7.258811995s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:00:32.512216  484633 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:00:32.516055  484633 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:00:32.516132  484633 node_conditions.go:123] node cpu capacity is 2
	I1002 08:00:32.516161  484633 node_conditions.go:105] duration metric: took 3.924746ms to run NodePressure ...
	I1002 08:00:32.516187  484633 start.go:241] waiting for startup goroutines ...
	I1002 08:00:32.516209  484633 start.go:246] waiting for cluster config update ...
	I1002 08:00:32.516235  484633 start.go:255] writing updated cluster config ...
	I1002 08:00:32.516546  484633 ssh_runner.go:195] Run: rm -f paused
	I1002 08:00:32.520533  484633 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:00:32.530585  484633 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-rcxgd" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 08:00:34.539286  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:37.038141  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:39.537244  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:42.037051  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:44.037585  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:46.038131  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:48.537045  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:50.538673  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:53.036528  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:55.040365  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:57.536104  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:00:59.537441  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	W1002 08:01:02.037954  484633 pod_ready.go:104] pod "coredns-5dd5756b68-rcxgd" is not "Ready", error: <nil>
	I1002 08:01:04.037515  484633 pod_ready.go:94] pod "coredns-5dd5756b68-rcxgd" is "Ready"
	I1002 08:01:04.037545  484633 pod_ready.go:86] duration metric: took 31.506933164s for pod "coredns-5dd5756b68-rcxgd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.040896  484633 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.046428  484633 pod_ready.go:94] pod "etcd-old-k8s-version-356986" is "Ready"
	I1002 08:01:04.046453  484633 pod_ready.go:86] duration metric: took 5.526147ms for pod "etcd-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.050059  484633 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.055953  484633 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-356986" is "Ready"
	I1002 08:01:04.055985  484633 pod_ready.go:86] duration metric: took 5.895414ms for pod "kube-apiserver-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.059489  484633 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.235466  484633 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-356986" is "Ready"
	I1002 08:01:04.235502  484633 pod_ready.go:86] duration metric: took 175.982341ms for pod "kube-controller-manager-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.435443  484633 pod_ready.go:83] waiting for pod "kube-proxy-8ds6v" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:04.835076  484633 pod_ready.go:94] pod "kube-proxy-8ds6v" is "Ready"
	I1002 08:01:04.835162  484633 pod_ready.go:86] duration metric: took 399.693988ms for pod "kube-proxy-8ds6v" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:05.035229  484633 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:05.434315  484633 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-356986" is "Ready"
	I1002 08:01:05.434346  484633 pod_ready.go:86] duration metric: took 399.087838ms for pod "kube-scheduler-old-k8s-version-356986" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:01:05.434359  484633 pod_ready.go:40] duration metric: took 32.913759246s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:01:05.497112  484633 start.go:623] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1002 08:01:05.502520  484633 out.go:203] 
	W1002 08:01:05.505679  484633 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1002 08:01:05.508569  484633 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1002 08:01:05.511768  484633 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-356986" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.485240797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.497963219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.498783797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.516906031Z" level=info msg="Created container 8437b32d980c2a539b85229d47d8a4aa08bd4f891dc1e38c482942da633bb52a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84/dashboard-metrics-scraper" id=1f96b1cc-3648-4504-87fc-ae96b3b1a9a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.523831969Z" level=info msg="Starting container: 8437b32d980c2a539b85229d47d8a4aa08bd4f891dc1e38c482942da633bb52a" id=fb54ffc2-35a7-41ba-856f-3c8cd7696af5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.526175679Z" level=info msg="Started container" PID=1644 containerID=8437b32d980c2a539b85229d47d8a4aa08bd4f891dc1e38c482942da633bb52a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84/dashboard-metrics-scraper id=fb54ffc2-35a7-41ba-856f-3c8cd7696af5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc7b4deef45e2743aaaf404f8d6c85d8321d14978e84da195a3645af86365bc1
	Oct 02 08:01:05 old-k8s-version-356986 conmon[1642]: conmon 8437b32d980c2a539b85 <ninfo>: container 1644 exited with status 1
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.686520961Z" level=info msg="Removing container: 72b74dbe03dc174c92f30d47ae36279190aac15d19eed864fb226d5627c99ecb" id=8eaa11e4-6180-4bc3-b9f1-9c13a38e060a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.696549533Z" level=info msg="Error loading conmon cgroup of container 72b74dbe03dc174c92f30d47ae36279190aac15d19eed864fb226d5627c99ecb: cgroup deleted" id=8eaa11e4-6180-4bc3-b9f1-9c13a38e060a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:01:05 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:05.700465836Z" level=info msg="Removed container 72b74dbe03dc174c92f30d47ae36279190aac15d19eed864fb226d5627c99ecb: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84/dashboard-metrics-scraper" id=8eaa11e4-6180-4bc3-b9f1-9c13a38e060a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.501361696Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.505663824Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.505702142Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.505722081Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.509069475Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.509102789Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.509123736Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.51242684Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.512460473Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.512483406Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.516109384Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.516162242Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.516198731Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.520349489Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:01:11 old-k8s-version-356986 crio[648]: time="2025-10-02T08:01:11.52039488Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8437b32d980c2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   cc7b4deef45e2       dashboard-metrics-scraper-5f989dc9cf-srr84       kubernetes-dashboard
	c8e11fae143d5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   000fe98d7ab04       storage-provisioner                              kube-system
	5d1c0fb229e1f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   30 seconds ago      Running             kubernetes-dashboard        0                   4ed7e9d57441f       kubernetes-dashboard-8694d4445c-45gx5            kubernetes-dashboard
	feb1bd1c279a6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   889da50654c89       busybox                                          default
	eca1bbbe0fa0e       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           51 seconds ago      Running             coredns                     1                   60c23d3b87fdd       coredns-5dd5756b68-rcxgd                         kube-system
	c00032d7e435e       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago      Running             kube-proxy                  1                   d08422367d459       kube-proxy-8ds6v                                 kube-system
	f3fbaee89da23       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   3da3ce91a6b59       kindnet-h7blk                                    kube-system
	836f4317c979e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   000fe98d7ab04       storage-provisioner                              kube-system
	b7fa366bdeb13       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           57 seconds ago      Running             kube-apiserver              1                   ac326571f2e99       kube-apiserver-old-k8s-version-356986            kube-system
	6dff7f35e35a4       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           57 seconds ago      Running             kube-scheduler              1                   eaf5bfbbfcd3f       kube-scheduler-old-k8s-version-356986            kube-system
	b30176313b502       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           57 seconds ago      Running             etcd                        1                   3632c2b19ce62       etcd-old-k8s-version-356986                      kube-system
	b88d2bd387df7       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           57 seconds ago      Running             kube-controller-manager     1                   0e1ac81fc28af       kube-controller-manager-old-k8s-version-356986   kube-system
	
	
	==> coredns [eca1bbbe0fa0e2cbf83d0a6ec4dfa7da3823783de1873ea5b0b9c60ab6006bca] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58188 - 10865 "HINFO IN 6652346976180594339.3661637428453201384. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02301361s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-356986
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-356986
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=old-k8s-version-356986
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_59_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:59:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-356986
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:01:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:01:00 +0000   Thu, 02 Oct 2025 07:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:01:00 +0000   Thu, 02 Oct 2025 07:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:01:00 +0000   Thu, 02 Oct 2025 07:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 08:01:00 +0000   Thu, 02 Oct 2025 07:59:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-356986
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 76937de9eada46c08a09b682a889c05f
	  System UUID:                35f9767f-9ab2-47f0-8d89-175f1127470c
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-rcxgd                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-old-k8s-version-356986                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-h7blk                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-356986             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-356986    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-8ds6v                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-356986             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-srr84        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-45gx5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-356986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-356986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           110s                 node-controller  Node old-k8s-version-356986 event: Registered Node old-k8s-version-356986 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-356986 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)    kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node old-k8s-version-356986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)    kubelet          Node old-k8s-version-356986 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                  node-controller  Node old-k8s-version-356986 event: Registered Node old-k8s-version-356986 in Controller
	
	
	==> dmesg <==
	[Oct 2 07:25] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:30] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:31] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:33] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:00] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b30176313b502e961dc11a216d8f484035b3f0c1657ac76eacce6f3e3eb40e68] <==
	{"level":"info","ts":"2025-10-02T08:00:25.368328Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T08:00:25.368363Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T08:00:25.370386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-02T08:00:25.370775Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-02T08:00:25.380851Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T08:00:25.380976Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T08:00:25.406832Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-02T08:00:25.418447Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T08:00:25.421754Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T08:00:25.418974Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-02T08:00:25.41901Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-02T08:00:26.683141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-02T08:00:26.683251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-02T08:00:26.683303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-02T08:00:26.683356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-02T08:00:26.683395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-02T08:00:26.683437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-02T08:00:26.683469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-02T08:00:26.685065Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-356986 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-02T08:00:26.685141Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T08:00:26.686121Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-02T08:00:26.691331Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T08:00:26.692346Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-02T08:00:26.69295Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-02T08:00:26.693007Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 08:01:22 up  2:43,  0 user,  load average: 1.40, 1.29, 1.53
	Linux old-k8s-version-356986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f3fbaee89da23074470de0cc3ebaf94c5dbfafef85f926825eb744fa22178c11] <==
	I1002 08:00:31.305917       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 08:00:31.320657       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 08:00:31.320804       1 main.go:148] setting mtu 1500 for CNI 
	I1002 08:00:31.320817       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 08:00:31.320833       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T08:00:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 08:00:31.500990       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 08:00:31.501006       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 08:00:31.501014       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 08:00:31.501313       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 08:01:01.501211       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 08:01:01.501218       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 08:01:01.501334       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 08:01:01.502662       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1002 08:01:02.701458       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 08:01:02.701492       1 metrics.go:72] Registering metrics
	I1002 08:01:02.701558       1 controller.go:711] "Syncing nftables rules"
	I1002 08:01:11.500969       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:01:11.501035       1 main.go:301] handling current node
	I1002 08:01:21.507165       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:01:21.507200       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b7fa366bdeb131010efd7f4bbce1b448a27310eefcbf896ea00434f576624347] <==
	I1002 08:00:29.935925       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1002 08:00:29.974126       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 08:00:30.012152       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 08:00:30.012185       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 08:00:30.012322       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 08:00:30.060067       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 08:00:30.060163       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 08:00:30.061927       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1002 08:00:30.062852       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1002 08:00:30.063018       1 aggregator.go:166] initial CRD sync complete...
	I1002 08:00:30.063033       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 08:00:30.063040       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 08:00:30.063048       1 cache.go:39] Caches are synced for autoregister controller
	E1002 08:00:30.155449       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 08:00:30.703901       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:00:32.289596       1 controller.go:624] quota admission added evaluator for: namespaces
	I1002 08:00:32.337613       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 08:00:32.363804       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:00:32.376416       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:00:32.388381       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 08:00:32.447640       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.248.165"}
	I1002 08:00:32.466492       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.125.73"}
	I1002 08:00:42.952759       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1002 08:00:42.956348       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 08:00:43.083774       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b88d2bd387df7b19f12ce6afdec4d533ff093f693444fa7a3a00b64ce367911e] <==
	I1002 08:00:43.030408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.126µs"
	I1002 08:00:43.041494       1 shared_informer.go:318] Caches are synced for daemon sets
	I1002 08:00:43.044040       1 shared_informer.go:318] Caches are synced for persistent volume
	I1002 08:00:43.045615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="20.243755ms"
	I1002 08:00:43.047578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.61µs"
	I1002 08:00:43.049589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.815µs"
	I1002 08:00:43.052165       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 08:00:43.057788       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1002 08:00:43.074522       1 shared_informer.go:318] Caches are synced for cronjob
	I1002 08:00:43.080546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.12µs"
	I1002 08:00:43.080992       1 shared_informer.go:318] Caches are synced for job
	I1002 08:00:43.105743       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 08:00:43.107771       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1002 08:00:43.481761       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 08:00:43.495603       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 08:00:43.495653       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1002 08:00:48.633077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.94µs"
	I1002 08:00:49.642328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.842µs"
	I1002 08:00:52.659616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.538608ms"
	I1002 08:00:52.660287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="60.784µs"
	I1002 08:00:53.325546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.388µs"
	I1002 08:01:03.780457       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.92117ms"
	I1002 08:01:03.781311       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.929µs"
	I1002 08:01:05.696426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.055µs"
	I1002 08:01:13.329665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.256µs"
	
	
	==> kube-proxy [c00032d7e435ee7c15d9510c5e137e5ba35b440362a62a7350120efff8c5da6a] <==
	I1002 08:00:31.526123       1 server_others.go:69] "Using iptables proxy"
	I1002 08:00:31.572610       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1002 08:00:31.776079       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 08:00:31.777975       1 server_others.go:152] "Using iptables Proxier"
	I1002 08:00:31.778008       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 08:00:31.778014       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 08:00:31.778049       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 08:00:31.778274       1 server.go:846] "Version info" version="v1.28.0"
	I1002 08:00:31.778286       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:00:31.783922       1 config.go:188] "Starting service config controller"
	I1002 08:00:31.783950       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 08:00:31.783968       1 config.go:97] "Starting endpoint slice config controller"
	I1002 08:00:31.783973       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 08:00:31.784789       1 config.go:315] "Starting node config controller"
	I1002 08:00:31.784804       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 08:00:31.884081       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 08:00:31.884155       1 shared_informer.go:318] Caches are synced for service config
	I1002 08:00:31.885597       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6dff7f35e35a464a7d11113c050955b61777a001b9cfa9a977dce6c341d60982] <==
	I1002 08:00:28.732095       1 serving.go:348] Generated self-signed cert in-memory
	I1002 08:00:31.310843       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1002 08:00:31.310870       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:00:31.334532       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1002 08:00:31.334560       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1002 08:00:31.334613       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:00:31.334621       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 08:00:31.334632       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:00:31.334638       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 08:00:31.336297       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 08:00:31.336345       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 08:00:31.436002       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 08:00:31.436070       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 08:00:31.443354       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Oct 02 08:00:43 old-k8s-version-356986 kubelet[774]: I1002 08:00:43.009335     774 topology_manager.go:215] "Topology Admit Handler" podUID="6c80228d-9d1c-4fce-8dd7-201ddba480bc" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-srr84"
	Oct 02 08:00:43 old-k8s-version-356986 kubelet[774]: I1002 08:00:43.019562     774 topology_manager.go:215] "Topology Admit Handler" podUID="b3d3d617-491d-4ea5-b0cd-fbc9bfb09ba1" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-45gx5"
	Oct 02 08:00:43 old-k8s-version-356986 kubelet[774]: I1002 08:00:43.020456     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6c80228d-9d1c-4fce-8dd7-201ddba480bc-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-srr84\" (UID: \"6c80228d-9d1c-4fce-8dd7-201ddba480bc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84"
	Oct 02 08:00:43 old-k8s-version-356986 kubelet[774]: I1002 08:00:43.020651     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf87s\" (UniqueName: \"kubernetes.io/projected/6c80228d-9d1c-4fce-8dd7-201ddba480bc-kube-api-access-xf87s\") pod \"dashboard-metrics-scraper-5f989dc9cf-srr84\" (UID: \"6c80228d-9d1c-4fce-8dd7-201ddba480bc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84"
	Oct 02 08:00:43 old-k8s-version-356986 kubelet[774]: I1002 08:00:43.121349     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c5cm\" (UniqueName: \"kubernetes.io/projected/b3d3d617-491d-4ea5-b0cd-fbc9bfb09ba1-kube-api-access-7c5cm\") pod \"kubernetes-dashboard-8694d4445c-45gx5\" (UID: \"b3d3d617-491d-4ea5-b0cd-fbc9bfb09ba1\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-45gx5"
	Oct 02 08:00:43 old-k8s-version-356986 kubelet[774]: I1002 08:00:43.121428     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b3d3d617-491d-4ea5-b0cd-fbc9bfb09ba1-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-45gx5\" (UID: \"b3d3d617-491d-4ea5-b0cd-fbc9bfb09ba1\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-45gx5"
	Oct 02 08:00:43 old-k8s-version-356986 kubelet[774]: W1002 08:00:43.353373     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/3e0fd1abc9e195c419ec28d6bd861fb0a07ed39a5296f1f006bb183763bd7d85/crio-4ed7e9d57441f14eb6c6f0d67e2f1121142165d79a8000f3f899cc4471652f89 WatchSource:0}: Error finding container 4ed7e9d57441f14eb6c6f0d67e2f1121142165d79a8000f3f899cc4471652f89: Status 404 returned error can't find the container with id 4ed7e9d57441f14eb6c6f0d67e2f1121142165d79a8000f3f899cc4471652f89
	Oct 02 08:00:48 old-k8s-version-356986 kubelet[774]: I1002 08:00:48.619601     774 scope.go:117] "RemoveContainer" containerID="6b1a996922bb6a4697314c5721091fe9aaed52d2250d08d73dbf1abd1ee443b7"
	Oct 02 08:00:49 old-k8s-version-356986 kubelet[774]: I1002 08:00:49.623685     774 scope.go:117] "RemoveContainer" containerID="6b1a996922bb6a4697314c5721091fe9aaed52d2250d08d73dbf1abd1ee443b7"
	Oct 02 08:00:49 old-k8s-version-356986 kubelet[774]: I1002 08:00:49.631186     774 scope.go:117] "RemoveContainer" containerID="72b74dbe03dc174c92f30d47ae36279190aac15d19eed864fb226d5627c99ecb"
	Oct 02 08:00:49 old-k8s-version-356986 kubelet[774]: E1002 08:00:49.631526     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-srr84_kubernetes-dashboard(6c80228d-9d1c-4fce-8dd7-201ddba480bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84" podUID="6c80228d-9d1c-4fce-8dd7-201ddba480bc"
	Oct 02 08:00:53 old-k8s-version-356986 kubelet[774]: I1002 08:00:53.310912     774 scope.go:117] "RemoveContainer" containerID="72b74dbe03dc174c92f30d47ae36279190aac15d19eed864fb226d5627c99ecb"
	Oct 02 08:00:53 old-k8s-version-356986 kubelet[774]: E1002 08:00:53.311745     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-srr84_kubernetes-dashboard(6c80228d-9d1c-4fce-8dd7-201ddba480bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84" podUID="6c80228d-9d1c-4fce-8dd7-201ddba480bc"
	Oct 02 08:00:53 old-k8s-version-356986 kubelet[774]: I1002 08:00:53.325767     774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-45gx5" podStartSLOduration=2.743990626 podCreationTimestamp="2025-10-02 08:00:42 +0000 UTC" firstStartedPulling="2025-10-02 08:00:43.357689709 +0000 UTC m=+19.055989252" lastFinishedPulling="2025-10-02 08:00:51.939381338 +0000 UTC m=+27.637680881" observedRunningTime="2025-10-02 08:00:52.647558236 +0000 UTC m=+28.345857778" watchObservedRunningTime="2025-10-02 08:00:53.325682255 +0000 UTC m=+29.023981797"
	Oct 02 08:01:01 old-k8s-version-356986 kubelet[774]: I1002 08:01:01.657929     774 scope.go:117] "RemoveContainer" containerID="836f4317c979eef6a650d578749a260f6ed5e3f31c262b3b74c2a01df2ed13aa"
	Oct 02 08:01:05 old-k8s-version-356986 kubelet[774]: I1002 08:01:05.481749     774 scope.go:117] "RemoveContainer" containerID="72b74dbe03dc174c92f30d47ae36279190aac15d19eed864fb226d5627c99ecb"
	Oct 02 08:01:05 old-k8s-version-356986 kubelet[774]: I1002 08:01:05.672156     774 scope.go:117] "RemoveContainer" containerID="72b74dbe03dc174c92f30d47ae36279190aac15d19eed864fb226d5627c99ecb"
	Oct 02 08:01:05 old-k8s-version-356986 kubelet[774]: I1002 08:01:05.676028     774 scope.go:117] "RemoveContainer" containerID="8437b32d980c2a539b85229d47d8a4aa08bd4f891dc1e38c482942da633bb52a"
	Oct 02 08:01:05 old-k8s-version-356986 kubelet[774]: E1002 08:01:05.676465     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-srr84_kubernetes-dashboard(6c80228d-9d1c-4fce-8dd7-201ddba480bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84" podUID="6c80228d-9d1c-4fce-8dd7-201ddba480bc"
	Oct 02 08:01:13 old-k8s-version-356986 kubelet[774]: I1002 08:01:13.310616     774 scope.go:117] "RemoveContainer" containerID="8437b32d980c2a539b85229d47d8a4aa08bd4f891dc1e38c482942da633bb52a"
	Oct 02 08:01:13 old-k8s-version-356986 kubelet[774]: E1002 08:01:13.311693     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-srr84_kubernetes-dashboard(6c80228d-9d1c-4fce-8dd7-201ddba480bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-srr84" podUID="6c80228d-9d1c-4fce-8dd7-201ddba480bc"
	Oct 02 08:01:17 old-k8s-version-356986 kubelet[774]: I1002 08:01:17.780053     774 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 02 08:01:17 old-k8s-version-356986 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 08:01:17 old-k8s-version-356986 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 08:01:17 old-k8s-version-356986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [5d1c0fb229e1f4de7c11313a7f39ff0ac8cf227dfac5475133dcc5e3386b24f2] <==
	2025/10/02 08:00:51 Starting overwatch
	2025/10/02 08:00:51 Using namespace: kubernetes-dashboard
	2025/10/02 08:00:51 Using in-cluster config to connect to apiserver
	2025/10/02 08:00:51 Using secret token for csrf signing
	2025/10/02 08:00:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 08:00:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 08:00:52 Successful initial request to the apiserver, version: v1.28.0
	2025/10/02 08:00:52 Generating JWE encryption key
	2025/10/02 08:00:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 08:00:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 08:00:52 Initializing JWE encryption key from synchronized object
	2025/10/02 08:00:52 Creating in-cluster Sidecar client
	2025/10/02 08:00:52 Serving insecurely on HTTP port: 9090
	2025/10/02 08:00:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 08:01:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [836f4317c979eef6a650d578749a260f6ed5e3f31c262b3b74c2a01df2ed13aa] <==
	I1002 08:00:31.202806       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 08:01:01.204731       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c8e11fae143d5af223ee8cd93022f50e9979e42cab3a78166ca1dc1c9138f36b] <==
	I1002 08:01:01.714534       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 08:01:01.773805       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 08:01:01.774000       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 08:01:19.172151       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 08:01:19.172333       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-356986_2643492d-532f-4d73-b187-a74880d73580!
	I1002 08:01:19.173169       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed2987ef-dd9a-4a01-9087-8248b6747c96", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-356986_2643492d-532f-4d73-b187-a74880d73580 became leader
	I1002 08:01:19.272794       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-356986_2643492d-532f-4d73-b187-a74880d73580!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-356986 -n old-k8s-version-356986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-356986 -n old-k8s-version-356986: exit status 2 (351.296156ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
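The "Running" value paired with exit status 2 is consistent with a half-paused profile: the kubelet log above shows kubelet.service being stopped at 08:01:17, so the apiserver container is still up while the node agent is not. Assuming the profile still exists, both fields this report queries can be read in one call by reusing the same Go-template keys:

	out/minikube-linux-arm64 status --format='{{.Host}} {{.APIServer}}' -p old-k8s-version-356986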
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-356986 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-604182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-604182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (280.297242ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:02:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-604182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
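The MK_ADDON_ENABLE_PAUSED error above comes from minikube's pre-flight "check paused" step, which shells out to sudo runc list -f json on the node; on this crio node /run/runc does not exist, so the check fails before the addon manifest is ever applied. A minimal reproduction sketch, assuming the no-preload-604182 profile is still running, is to run the same command over ssh:

	out/minikube-linux-arm64 -p no-preload-604182 ssh "sudo runc list -f json"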
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-604182 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-604182 describe deploy/metrics-server -n kube-system: exit status 1 (91.533249ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-604182 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
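This assertion greps the deploy/metrics-server describe output above for the expected image string, and it has nothing to match because the failed enable step never created the Deployment. A by-hand version of the same image check, assuming the Deployment had been created, could look like:

	kubectl --context no-preload-604182 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'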
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-604182
helpers_test.go:243: (dbg) docker inspect no-preload-604182:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd",
	        "Created": "2025-10-02T08:01:27.464953821Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 488492,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T08:01:27.561694599Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/hosts",
	        "LogPath": "/var/lib/docker/containers/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd-json.log",
	        "Name": "/no-preload-604182",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-604182:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-604182",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd",
	                "LowerDir": "/var/lib/docker/overlay2/16b601c8b3476133a497e1d1758975b5ed20ca2deca3a8c241f50363fd47c895-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16b601c8b3476133a497e1d1758975b5ed20ca2deca3a8c241f50363fd47c895/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16b601c8b3476133a497e1d1758975b5ed20ca2deca3a8c241f50363fd47c895/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16b601c8b3476133a497e1d1758975b5ed20ca2deca3a8c241f50363fd47c895/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-604182",
	                "Source": "/var/lib/docker/volumes/no-preload-604182/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-604182",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-604182",
	                "name.minikube.sigs.k8s.io": "no-preload-604182",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4e86c6efc7f581d93a39c91e2949a5d4c66b7410496a072826e6b25fe4631115",
	            "SandboxKey": "/var/run/docker/netns/4e86c6efc7f5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33410"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-604182": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:af:6f:d3:44:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b49b2bd463034ec68025fea3957066414ae3acd9986e1db0b657dcf84796d697",
	                    "EndpointID": "6fb40ffcc4405d0f7df360f8bfa8efbbffaf02495c91acc3a7b6132e57fa62da",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-604182",
	                        "eb7634b68495"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-604182 -n no-preload-604182
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-604182 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-604182 logs -n 25: (1.25488693s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-810803 sudo crio config                                                                                                                                                                                                             │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ delete  │ -p cilium-810803                                                                                                                                                                                                                              │ cilium-810803             │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │ 02 Oct 25 07:49 UTC │
	│ start   │ -p force-systemd-env-297062 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-297062  │ jenkins │ v1.37.0 │ 02 Oct 25 07:49 UTC │                     │
	│ ssh     │ force-systemd-flag-275910 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-275910 │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ delete  │ -p force-systemd-flag-275910                                                                                                                                                                                                                  │ force-systemd-flag-275910 │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ start   │ -p cert-expiration-759246 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-759246    │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ delete  │ -p force-systemd-env-297062                                                                                                                                                                                                                   │ force-systemd-env-297062  │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ start   │ -p cert-options-654417 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ ssh     │ cert-options-654417 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ ssh     │ -p cert-options-654417 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ delete  │ -p cert-options-654417                                                                                                                                                                                                                        │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:59 UTC │
	│ start   │ -p cert-expiration-759246 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-759246    │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │ 02 Oct 25 08:01 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-356986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │                     │
	│ stop    │ -p old-k8s-version-356986 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:00 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-356986 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:00 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:01 UTC │
	│ image   │ old-k8s-version-356986 image list --format=json                                                                                                                                                                                               │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ pause   │ -p old-k8s-version-356986 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │                     │
	│ delete  │ -p old-k8s-version-356986                                                                                                                                                                                                                     │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ delete  │ -p old-k8s-version-356986                                                                                                                                                                                                                     │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182         │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:02 UTC │
	│ delete  │ -p cert-expiration-759246                                                                                                                                                                                                                     │ cert-expiration-759246    │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347        │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-604182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-604182         │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:01:53
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:01:53.247735  491185 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:01:53.247981  491185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:01:53.248010  491185 out.go:374] Setting ErrFile to fd 2...
	I1002 08:01:53.248029  491185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:01:53.248326  491185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:01:53.248787  491185 out.go:368] Setting JSON to false
	I1002 08:01:53.249723  491185 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9865,"bootTime":1759382249,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 08:01:53.249820  491185 start.go:140] virtualization:  
	I1002 08:01:53.254008  491185 out.go:179] * [embed-certs-171347] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:01:53.257258  491185 notify.go:220] Checking for updates...
	I1002 08:01:53.260422  491185 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:01:53.263271  491185 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:01:53.266288  491185 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:01:53.269174  491185 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 08:01:53.272151  491185 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:01:53.275224  491185 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:01:53.279728  491185 config.go:182] Loaded profile config "no-preload-604182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:01:53.279921  491185 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:01:53.323947  491185 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:01:53.324131  491185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:01:53.480837  491185 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-02 08:01:53.46851723 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:01:53.480941  491185 docker.go:318] overlay module found
	I1002 08:01:53.484787  491185 out.go:179] * Using the docker driver based on user configuration
	I1002 08:01:53.487810  491185 start.go:304] selected driver: docker
	I1002 08:01:53.487833  491185 start.go:924] validating driver "docker" against <nil>
	I1002 08:01:53.487847  491185 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:01:53.488605  491185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:01:53.661773  491185 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-02 08:01:53.639015076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:01:53.661935  491185 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 08:01:53.662179  491185 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:01:53.665637  491185 out.go:179] * Using Docker driver with root privileges
	I1002 08:01:53.668765  491185 cni.go:84] Creating CNI manager for ""
	I1002 08:01:53.668846  491185 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:01:53.668861  491185 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 08:01:53.668942  491185 start.go:348] cluster config:
	{Name:embed-certs-171347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:01:53.672190  491185 out.go:179] * Starting "embed-certs-171347" primary control-plane node in "embed-certs-171347" cluster
	I1002 08:01:53.675361  491185 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 08:01:53.678539  491185 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 08:01:53.682278  491185 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:01:53.682379  491185 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 08:01:53.682392  491185 cache.go:58] Caching tarball of preloaded images
	I1002 08:01:53.682458  491185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 08:01:53.683537  491185 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 08:01:53.684017  491185 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 08:01:53.684156  491185 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/config.json ...
	I1002 08:01:53.684350  491185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/config.json: {Name:mkffdf9c9f4c62e55a74ed70ab34cefd2371dfe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:01:53.772320  491185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 08:01:53.772346  491185 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 08:01:53.772372  491185 cache.go:232] Successfully downloaded all kic artifacts
	I1002 08:01:53.772545  491185 start.go:360] acquireMachinesLock for embed-certs-171347: {Name:mk251fc9b359c61a60beaff4e6d636acffa89ca4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:01:53.773368  491185 start.go:364] duration metric: took 790.538µs to acquireMachinesLock for "embed-certs-171347"
	I1002 08:01:53.773568  491185 start.go:93] Provisioning new machine with config: &{Name:embed-certs-171347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:01:53.773662  491185 start.go:125] createHost starting for "" (driver="docker")
	I1002 08:01:52.469867  488189 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.603823601s)
	I1002 08:01:52.469897  488189 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1002 08:01:52.469917  488189 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1002 08:01:52.469968  488189 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1002 08:01:53.207703  488189 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1002 08:01:53.207734  488189 cache_images.go:124] Successfully loaded all cached images
	I1002 08:01:53.207741  488189 cache_images.go:93] duration metric: took 19.811322238s to LoadCachedImages
	I1002 08:01:53.207752  488189 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 08:01:53.207831  488189 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-604182 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-604182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 08:01:53.207906  488189 ssh_runner.go:195] Run: crio config
	I1002 08:01:53.296362  488189 cni.go:84] Creating CNI manager for ""
	I1002 08:01:53.296384  488189 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:01:53.296404  488189 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 08:01:53.296430  488189 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-604182 NodeName:no-preload-604182 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 08:01:53.296560  488189 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-604182"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
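
A config of the shape above is rendered from a handful of per-node parameters (node name, node IP, pod subnet, Kubernetes version). The sketch below is a minimal, hypothetical Go example using text/template to produce a small fragment of such a file; it is not minikube's actual generator, and the template only covers a few of the fields shown above.

package main

import (
	"os"
	"text/template"
)

// nodeParams holds the per-node values that vary in the generated config;
// everything else in this sketch is fixed for illustration.
type nodeParams struct {
	NodeName   string
	NodeIP     string
	PodSubnet  string
	K8sVersion string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	p := nodeParams{
		NodeName:   "no-preload-604182",
		NodeIP:     "192.168.76.2",
		PodSubnet:  "10.244.0.0/16",
		K8sVersion: "v1.34.1",
	}
	// Render the fragment to stdout; a real caller would write it out to a
	// file such as the kubeadm.yaml.new seen later in this log.
	t := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}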
	
	I1002 08:01:53.296633  488189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 08:01:53.306777  488189 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1002 08:01:53.306947  488189 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1002 08:01:53.323193  488189 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21643-292504/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1002 08:01:53.323284  488189 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21643-292504/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1002 08:01:53.323427  488189 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1002 08:01:53.323760  488189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1002 08:01:53.329426  488189 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1002 08:01:53.329470  488189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1002 08:01:54.241430  488189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:01:54.262996  488189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1002 08:01:54.268539  488189 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1002 08:01:54.268622  488189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1002 08:01:54.378002  488189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1002 08:01:54.418269  488189 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1002 08:01:54.418305  488189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
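
The kubelet/kubeadm/kubectl downloads above use URLs of the form ...?checksum=file:<url>.sha256, i.e. each binary is fetched together with a published SHA-256 digest and verified before being copied to the node. A self-contained Go sketch of that verify-after-download step follows; it is a hypothetical helper, not minikube's download.go, and it assumes the .sha256 file contains just the hex digest.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the SHA-256 of what was written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	// Hash the stream while writing it to disk.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet"
	got, err := fetch(base, "/tmp/kubelet")
	if err != nil {
		panic(err)
	}
	// Assumption: the companion .sha256 file holds only the expected hex digest.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if got != strings.TrimSpace(string(want)) {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Println("kubelet checksum verified")
}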
	I1002 08:01:54.935278  488189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 08:01:54.948515  488189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 08:01:54.969273  488189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 08:01:54.992565  488189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1002 08:01:55.029541  488189 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 08:01:55.039238  488189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:01:55.052556  488189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:01:55.227892  488189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:01:55.249084  488189 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182 for IP: 192.168.76.2
	I1002 08:01:55.249110  488189 certs.go:195] generating shared ca certs ...
	I1002 08:01:55.249127  488189 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:01:55.249310  488189 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 08:01:55.249377  488189 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 08:01:55.249393  488189 certs.go:257] generating profile certs ...
	I1002 08:01:55.249464  488189 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.key
	I1002 08:01:55.249481  488189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt with IP's: []
	I1002 08:01:56.044533  488189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt ...
	I1002 08:01:56.044568  488189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: {Name:mk40034468fe1ddc5a53de5a96472a4084f306f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:01:56.044811  488189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.key ...
	I1002 08:01:56.044828  488189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.key: {Name:mk6d68f1b7e8b084a205b1f3b839a2f98b5f1466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:01:56.044960  488189 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.key.e3932ce3
	I1002 08:01:56.044982  488189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.crt.e3932ce3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 08:01:56.743641  488189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.crt.e3932ce3 ...
	I1002 08:01:56.743675  488189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.crt.e3932ce3: {Name:mk7035cf515b28c3c278a98726ede5b921ab2928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:01:56.743910  488189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.key.e3932ce3 ...
	I1002 08:01:56.743928  488189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.key.e3932ce3: {Name:mkf7735979958f5f9ef8c3f4613807056c6e1c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:01:56.744056  488189 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.crt.e3932ce3 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.crt
	I1002 08:01:56.744170  488189 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.key.e3932ce3 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.key
	I1002 08:01:56.744253  488189 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/proxy-client.key
	I1002 08:01:56.744289  488189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/proxy-client.crt with IP's: []
	I1002 08:01:57.295297  488189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/proxy-client.crt ...
	I1002 08:01:57.295327  488189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/proxy-client.crt: {Name:mkfa341c246ad3654bd41ab3a45fdef105e0e401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:01:57.295544  488189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/proxy-client.key ...
	I1002 08:01:57.295563  488189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/proxy-client.key: {Name:mk3efaea7b04dc97b9465370e8607b4879b230c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
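
The profile certificates above are generated with explicit IP SANs such as [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]. As a rough standalone illustration of a SAN-bearing certificate like that, here is a hedged Go sketch using crypto/x509; it self-signs to stay short, whereas the real profile certs are signed by the cluster CA.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the apiserver-style certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs matching the list logged above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.76.2"),
		},
	}
	// Self-signed for brevity (template doubles as parent).
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}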
	I1002 08:01:57.295779  488189 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 08:01:57.295842  488189 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 08:01:57.295858  488189 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 08:01:57.295887  488189 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 08:01:57.295942  488189 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 08:01:57.295972  488189 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 08:01:57.296037  488189 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:01:57.296664  488189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 08:01:57.335727  488189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 08:01:57.353736  488189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 08:01:57.372047  488189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 08:01:57.390072  488189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 08:01:57.408380  488189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 08:01:57.426816  488189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 08:01:57.445217  488189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 08:01:57.466404  488189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 08:01:57.485105  488189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 08:01:57.503970  488189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 08:01:57.522785  488189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 08:01:57.536882  488189 ssh_runner.go:195] Run: openssl version
	I1002 08:01:57.544503  488189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 08:01:57.553858  488189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:01:57.558911  488189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:01:57.559030  488189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:01:57.601583  488189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 08:01:57.610487  488189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 08:01:57.619363  488189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 08:01:57.627132  488189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 08:01:57.627200  488189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 08:01:57.669680  488189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 08:01:57.680991  488189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 08:01:57.690221  488189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 08:01:57.695467  488189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 08:01:57.695560  488189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 08:01:57.737862  488189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
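
The ls/openssl/ln sequence above installs each extra CA into the system trust store: the PEM is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and /etc/ssl/certs/<hash>.0 is symlinked at it. Below is a small, hypothetical Go wrapper around the same pattern; it shells out to the same openssl command the log shows and needs root to write into /etc/ssl/certs.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkIntoTrustStore mirrors the "openssl x509 -hash -noout" + "ln -fs" steps above.
func linkIntoTrustStore(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of ln -fs: drop any stale link, then create the new one.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}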
	I1002 08:01:57.747157  488189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 08:01:57.752797  488189 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 08:01:57.752894  488189 kubeadm.go:400] StartCluster: {Name:no-preload-604182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-604182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:01:57.752991  488189 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 08:01:57.753079  488189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:01:57.788419  488189 cri.go:89] found id: ""
	I1002 08:01:57.788552  488189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 08:01:57.799403  488189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 08:01:57.808081  488189 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 08:01:57.808174  488189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 08:01:57.819213  488189 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 08:01:57.819239  488189 kubeadm.go:157] found existing configuration files:
	
	I1002 08:01:57.819320  488189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 08:01:57.828859  488189 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 08:01:57.828950  488189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 08:01:57.837021  488189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 08:01:57.846247  488189 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 08:01:57.846367  488189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 08:01:57.854400  488189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 08:01:57.863194  488189 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 08:01:57.863289  488189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 08:01:57.871629  488189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 08:01:57.880579  488189 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 08:01:57.880673  488189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 08:01:57.888813  488189 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 08:01:57.941143  488189 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 08:01:57.942373  488189 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 08:01:57.988528  488189 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 08:01:57.988654  488189 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 08:01:57.988695  488189 kubeadm.go:318] OS: Linux
	I1002 08:01:57.988753  488189 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 08:01:57.988816  488189 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 08:01:57.988878  488189 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 08:01:57.988947  488189 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 08:01:57.989020  488189 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 08:01:57.989093  488189 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 08:01:57.989161  488189 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 08:01:57.989238  488189 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 08:01:57.989309  488189 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 08:01:58.116600  488189 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 08:01:58.116768  488189 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 08:01:58.116900  488189 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 08:01:58.151528  488189 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 08:01:53.777545  491185 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 08:01:53.778487  491185 start.go:159] libmachine.API.Create for "embed-certs-171347" (driver="docker")
	I1002 08:01:53.778557  491185 client.go:168] LocalClient.Create starting
	I1002 08:01:53.778728  491185 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem
	I1002 08:01:53.778804  491185 main.go:141] libmachine: Decoding PEM data...
	I1002 08:01:53.778831  491185 main.go:141] libmachine: Parsing certificate...
	I1002 08:01:53.778904  491185 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem
	I1002 08:01:53.779013  491185 main.go:141] libmachine: Decoding PEM data...
	I1002 08:01:53.779042  491185 main.go:141] libmachine: Parsing certificate...
	I1002 08:01:53.779960  491185 cli_runner.go:164] Run: docker network inspect embed-certs-171347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 08:01:53.872797  491185 cli_runner.go:211] docker network inspect embed-certs-171347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 08:01:53.873023  491185 network_create.go:284] running [docker network inspect embed-certs-171347] to gather additional debugging logs...
	I1002 08:01:53.873048  491185 cli_runner.go:164] Run: docker network inspect embed-certs-171347
	W1002 08:01:53.908117  491185 cli_runner.go:211] docker network inspect embed-certs-171347 returned with exit code 1
	I1002 08:01:53.908145  491185 network_create.go:287] error running [docker network inspect embed-certs-171347]: docker network inspect embed-certs-171347: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-171347 not found
	I1002 08:01:53.908159  491185 network_create.go:289] output of [docker network inspect embed-certs-171347]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-171347 not found
	
	** /stderr **
	I1002 08:01:53.908271  491185 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:01:54.004468  491185 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-87a294cab4b5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:50:ad:a1:2a:88} reservation:<nil>}
	I1002 08:01:54.005274  491185 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-560172b9232e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:9f:ec:fb:3f:87} reservation:<nil>}
	I1002 08:01:54.006826  491185 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2eae6334e56d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:6a:a0:79:3a:d9} reservation:<nil>}
	I1002 08:01:54.008867  491185 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b49b2bd46303 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:5e:f0:2d:35:72} reservation:<nil>}
	I1002 08:01:54.009881  491185 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f3620}
	I1002 08:01:54.009936  491185 network_create.go:124] attempt to create docker network embed-certs-171347 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 08:01:54.010016  491185 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-171347 embed-certs-171347
	I1002 08:01:54.209255  491185 network_create.go:108] docker network embed-certs-171347 192.168.85.0/24 created
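
network.go above walks candidate private /24 subnets (here in steps of 9: 192.168.49.0/24, .58, .67, .76, ...) and picks the first one not already claimed by an existing bridge. A simplified, hypothetical sketch of that selection; the taken set is hard-coded here, whereas minikube derives it from docker network inspect.

package main

import "fmt"

// freeSubnet returns the first 192.168.X.0/24 candidate (X = 49, 58, 67, ...)
// not present in the taken set, mimicking the skip/use decisions logged above.
func freeSubnet(taken map[string]bool) string {
	for x := 49; x < 255; x += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", x)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(freeSubnet(taken)) // prints 192.168.85.0/24, matching the log
}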
	I1002 08:01:54.209290  491185 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-171347" container
	I1002 08:01:54.209365  491185 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 08:01:54.255349  491185 cli_runner.go:164] Run: docker volume create embed-certs-171347 --label name.minikube.sigs.k8s.io=embed-certs-171347 --label created_by.minikube.sigs.k8s.io=true
	I1002 08:01:54.304910  491185 oci.go:103] Successfully created a docker volume embed-certs-171347
	I1002 08:01:54.305003  491185 cli_runner.go:164] Run: docker run --rm --name embed-certs-171347-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-171347 --entrypoint /usr/bin/test -v embed-certs-171347:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 08:01:55.477306  491185 cli_runner.go:217] Completed: docker run --rm --name embed-certs-171347-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-171347 --entrypoint /usr/bin/test -v embed-certs-171347:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (1.172259419s)
	I1002 08:01:55.477334  491185 oci.go:107] Successfully prepared a docker volume embed-certs-171347
	I1002 08:01:55.477357  491185 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:01:55.477377  491185 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 08:01:55.477454  491185 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-171347:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 08:01:58.185278  488189 out.go:252]   - Generating certificates and keys ...
	I1002 08:01:58.185473  488189 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 08:01:58.185556  488189 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 08:01:58.754634  488189 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 08:01:59.196769  488189 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 08:01:59.693595  488189 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 08:01:59.906630  488189 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 08:02:01.289395  488189 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 08:02:01.289546  488189 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-604182] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 08:02:00.423804  491185 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-171347:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.946304765s)
	I1002 08:02:00.423839  491185 kic.go:203] duration metric: took 4.946457241s to extract preloaded images to volume ...
	W1002 08:02:00.424003  491185 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 08:02:00.424147  491185 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 08:02:00.563308  491185 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-171347 --name embed-certs-171347 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-171347 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-171347 --network embed-certs-171347 --ip 192.168.85.2 --volume embed-certs-171347:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 08:02:00.990612  491185 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Running}}
	I1002 08:02:01.017969  491185 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:02:01.053650  491185 cli_runner.go:164] Run: docker exec embed-certs-171347 stat /var/lib/dpkg/alternatives/iptables
	I1002 08:02:01.115783  491185 oci.go:144] the created container "embed-certs-171347" has a running status.
	I1002 08:02:01.115822  491185 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa...
	I1002 08:02:02.158714  491185 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 08:02:02.198838  491185 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:02:02.224785  491185 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 08:02:02.224806  491185 kic_runner.go:114] Args: [docker exec --privileged embed-certs-171347 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 08:02:02.303713  491185 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:02:02.327325  491185 machine.go:93] provisionDockerMachine start ...
	I1002 08:02:02.327420  491185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:02:02.351062  491185 main.go:141] libmachine: Using SSH client type: native
	I1002 08:02:02.351420  491185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1002 08:02:02.351437  491185 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 08:02:02.352101  491185 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44798->127.0.0.1:33413: read: connection reset by peer
	I1002 08:02:02.600772  488189 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 08:02:02.600923  488189 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-604182] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 08:02:02.864250  488189 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 08:02:03.347858  488189 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 08:02:03.933888  488189 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 08:02:03.934422  488189 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 08:02:04.356757  488189 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 08:02:04.963355  488189 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 08:02:05.343198  488189 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 08:02:05.788324  488189 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 08:02:06.135482  488189 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 08:02:06.135589  488189 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 08:02:06.143644  488189 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 08:02:06.149957  488189 out.go:252]   - Booting up control plane ...
	I1002 08:02:06.150077  488189 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 08:02:06.150183  488189 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 08:02:06.150269  488189 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 08:02:06.207528  488189 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 08:02:06.207648  488189 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 08:02:06.207762  488189 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 08:02:06.207855  488189 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 08:02:06.207900  488189 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 08:02:05.513047  491185 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-171347
	
	I1002 08:02:05.513094  491185 ubuntu.go:182] provisioning hostname "embed-certs-171347"
	I1002 08:02:05.513161  491185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:02:05.541661  491185 main.go:141] libmachine: Using SSH client type: native
	I1002 08:02:05.542208  491185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1002 08:02:05.542233  491185 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-171347 && echo "embed-certs-171347" | sudo tee /etc/hostname
	I1002 08:02:05.734531  491185 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-171347
	
	I1002 08:02:05.734611  491185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:02:05.757648  491185 main.go:141] libmachine: Using SSH client type: native
	I1002 08:02:05.757971  491185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1002 08:02:05.757995  491185 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-171347' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-171347/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-171347' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 08:02:05.923303  491185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 08:02:05.923333  491185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 08:02:05.923367  491185 ubuntu.go:190] setting up certificates
	I1002 08:02:05.923378  491185 provision.go:84] configureAuth start
	I1002 08:02:05.923448  491185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-171347
	I1002 08:02:05.945111  491185 provision.go:143] copyHostCerts
	I1002 08:02:05.945188  491185 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 08:02:05.945204  491185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 08:02:05.945283  491185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 08:02:05.945384  491185 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 08:02:05.945393  491185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 08:02:05.945423  491185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 08:02:05.945491  491185 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 08:02:05.945504  491185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 08:02:05.945530  491185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 08:02:05.945586  491185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.embed-certs-171347 san=[127.0.0.1 192.168.85.2 embed-certs-171347 localhost minikube]
	I1002 08:02:07.009886  491185 provision.go:177] copyRemoteCerts
	I1002 08:02:07.009966  491185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 08:02:07.010017  491185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:02:07.028587  491185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:02:07.127781  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 08:02:07.159026  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 08:02:07.194095  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 08:02:07.216061  491185 provision.go:87] duration metric: took 1.292662753s to configureAuth
	I1002 08:02:07.216092  491185 ubuntu.go:206] setting minikube options for container-runtime
	I1002 08:02:07.216271  491185 config.go:182] Loaded profile config "embed-certs-171347": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:02:07.216384  491185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:02:07.244736  491185 main.go:141] libmachine: Using SSH client type: native
	I1002 08:02:07.245084  491185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1002 08:02:07.245107  491185 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 08:02:07.579154  491185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 08:02:07.579269  491185 machine.go:96] duration metric: took 5.251916444s to provisionDockerMachine
	I1002 08:02:07.579327  491185 client.go:171] duration metric: took 13.800749704s to LocalClient.Create
	I1002 08:02:07.579385  491185 start.go:167] duration metric: took 13.800893065s to libmachine.API.Create "embed-certs-171347"
	I1002 08:02:07.579436  491185 start.go:293] postStartSetup for "embed-certs-171347" (driver="docker")
	I1002 08:02:07.579482  491185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 08:02:07.579627  491185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 08:02:07.579722  491185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:02:07.605074  491185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:02:07.714420  491185 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 08:02:07.717767  491185 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 08:02:07.717798  491185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 08:02:07.717810  491185 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 08:02:07.717866  491185 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 08:02:07.717949  491185 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 08:02:07.718058  491185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 08:02:07.727054  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:02:07.748246  491185 start.go:296] duration metric: took 168.7685ms for postStartSetup
	I1002 08:02:07.749318  491185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-171347
	I1002 08:02:07.769011  491185 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/config.json ...
	I1002 08:02:07.769313  491185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 08:02:07.769367  491185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:02:07.787169  491185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:02:07.884266  491185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 08:02:07.888994  491185 start.go:128] duration metric: took 14.115313137s to createHost
	I1002 08:02:07.889018  491185 start.go:83] releasing machines lock for "embed-certs-171347", held for 14.115483468s
	I1002 08:02:07.889095  491185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-171347
	I1002 08:02:07.911572  491185 ssh_runner.go:195] Run: cat /version.json
	I1002 08:02:07.911636  491185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:02:07.911689  491185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 08:02:07.911757  491185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:02:07.929707  491185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:02:07.940623  491185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:02:08.027035  491185 ssh_runner.go:195] Run: systemctl --version
	I1002 08:02:08.124246  491185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 08:02:08.195958  491185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 08:02:08.201259  491185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 08:02:08.201340  491185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 08:02:08.236694  491185 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 08:02:08.236720  491185 start.go:495] detecting cgroup driver to use...
	I1002 08:02:08.236753  491185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 08:02:08.236805  491185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 08:02:08.272061  491185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 08:02:08.288756  491185 docker.go:218] disabling cri-docker service (if available) ...
	I1002 08:02:08.288823  491185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 08:02:08.312808  491185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 08:02:08.340042  491185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 08:02:08.565175  491185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 08:02:08.790394  491185 docker.go:234] disabling docker service ...
	I1002 08:02:08.790477  491185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 08:02:08.834438  491185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 08:02:08.853717  491185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 08:02:09.070163  491185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 08:02:09.257366  491185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 08:02:09.279876  491185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 08:02:09.304225  491185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 08:02:09.304376  491185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:02:09.313594  491185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 08:02:09.313746  491185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:02:09.328345  491185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:02:09.337709  491185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:02:09.357777  491185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 08:02:09.372100  491185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:02:09.385320  491185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:02:09.403758  491185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:02:09.425786  491185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 08:02:09.437077  491185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 08:02:09.445061  491185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:02:09.626976  491185 ssh_runner.go:195] Run: sudo systemctl restart crio
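A minimal sketch of what the crictl and CRI-O configuration steps above leave on the node, assuming the rewritten keys live in the usual 02-crio.conf drop-in (only the values touched by the sed commands are shown; everything else keeps CRI-O's defaults):

# endpoint written by the tee step above
cat /etc/crictl.yaml
# runtime-endpoint: unix:///var/run/crio/crio.sock

# keys rewritten by the sed steps above (section placement assumed)
cat /etc/crio/crio.conf.d/02-crio.conf
# pause_image = "registry.k8s.io/pause:3.10.1"
# cgroup_manager = "cgroupfs"
# conmon_cgroup = "pod"
# default_sysctls = [
#   "net.ipv4.ip_unprivileged_port_start=0",
# ]

sudo systemctl restart crio   # as logged above, followed by the crio.sock wait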
	I1002 08:02:09.814167  491185 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 08:02:09.814330  491185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 08:02:09.818692  491185 start.go:563] Will wait 60s for crictl version
	I1002 08:02:09.818810  491185 ssh_runner.go:195] Run: which crictl
	I1002 08:02:09.827487  491185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 08:02:09.868195  491185 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 08:02:09.868348  491185 ssh_runner.go:195] Run: crio --version
	I1002 08:02:09.905187  491185 ssh_runner.go:195] Run: crio --version
	I1002 08:02:09.945224  491185 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 08:02:06.399155  488189 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 08:02:06.399284  488189 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 08:02:07.899988  488189 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500949147s
	I1002 08:02:07.904811  488189 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 08:02:07.904943  488189 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 08:02:07.905041  488189 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 08:02:07.905150  488189 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 08:02:09.948122  491185 cli_runner.go:164] Run: docker network inspect embed-certs-171347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:02:09.974549  491185 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 08:02:09.978866  491185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:02:09.993475  491185 kubeadm.go:883] updating cluster {Name:embed-certs-171347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 08:02:09.993580  491185 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:02:09.993635  491185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:02:10.049661  491185 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:02:10.049682  491185 crio.go:433] Images already preloaded, skipping extraction
	I1002 08:02:10.049753  491185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:02:10.103816  491185 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:02:10.103838  491185 cache_images.go:85] Images are preloaded, skipping loading
	I1002 08:02:10.103846  491185 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 08:02:10.104633  491185 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-171347 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
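The kubelet unit fragment logged above is later copied to the node as a systemd drop-in (the 368-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down). A sketch of how it lands, with the empty ExecStart= line clearing the distro default before the minikube-specific command line is set:

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# [Unit]
# Wants=crio.service
#
# [Service]
# ExecStart=
# ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-171347 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
#
# [Install]
sudo systemctl daemon-reload && sudo systemctl start kubelet   # as logged below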
	I1002 08:02:10.104724  491185 ssh_runner.go:195] Run: crio config
	I1002 08:02:10.230424  491185 cni.go:84] Creating CNI manager for ""
	I1002 08:02:10.230496  491185 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:02:10.230529  491185 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 08:02:10.230581  491185 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-171347 NodeName:embed-certs-171347 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 08:02:10.230746  491185 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-171347"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 08:02:10.230836  491185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 08:02:10.244085  491185 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 08:02:10.244196  491185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 08:02:10.260855  491185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1002 08:02:10.293719  491185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 08:02:10.316210  491185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
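Before it is consumed by kubeadm init (logged further down with a long --ignore-preflight-errors list), the generated config can be exercised without touching cluster state; a sketch, not part of the logged run, using kubeadm's standard --dry-run flag against the file just copied:

sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml.new \
  --dry-run \
  --ignore-preflight-errors=SystemVerification   # same class of errors minikube ignores on the docker driver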
	I1002 08:02:10.334621  491185 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 08:02:10.338649  491185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
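The two grep-and-rewrite one-liners (host.minikube.internal above, control-plane.minikube.internal here) leave the node's /etc/hosts with the minikube-internal names; a sketch of the resulting entries (all other hosts entries unchanged):

grep minikube.internal /etc/hosts
# 192.168.85.1	host.minikube.internal
# 192.168.85.2	control-plane.minikube.internal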
	I1002 08:02:10.350450  491185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:02:10.552256  491185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:02:10.577534  491185 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347 for IP: 192.168.85.2
	I1002 08:02:10.577610  491185 certs.go:195] generating shared ca certs ...
	I1002 08:02:10.577640  491185 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:02:10.577818  491185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 08:02:10.577888  491185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 08:02:10.577925  491185 certs.go:257] generating profile certs ...
	I1002 08:02:10.578005  491185 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/client.key
	I1002 08:02:10.578045  491185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/client.crt with IP's: []
	I1002 08:02:11.034239  491185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/client.crt ...
	I1002 08:02:11.034275  491185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/client.crt: {Name:mk0acc53a169421e3f2875612c09d1be96589558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:02:11.034480  491185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/client.key ...
	I1002 08:02:11.034493  491185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/client.key: {Name:mkf2d327a754355ff71c88141ec86c9db907943a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:02:11.034590  491185 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.key.2c92e75c
	I1002 08:02:11.034610  491185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.crt.2c92e75c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 08:02:11.072712  491185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.crt.2c92e75c ...
	I1002 08:02:11.072747  491185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.crt.2c92e75c: {Name:mkb7b1739b6448278eaf42d0f27e0421920c4d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:02:11.072925  491185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.key.2c92e75c ...
	I1002 08:02:11.072943  491185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.key.2c92e75c: {Name:mk2d38c67bd85869d7a3561d917601abf3185841 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:02:11.073030  491185 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.crt.2c92e75c -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.crt
	I1002 08:02:11.073111  491185 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.key.2c92e75c -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.key
	I1002 08:02:11.073176  491185 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/proxy-client.key
	I1002 08:02:11.073195  491185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/proxy-client.crt with IP's: []
	I1002 08:02:11.562635  491185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/proxy-client.crt ...
	I1002 08:02:11.562714  491185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/proxy-client.crt: {Name:mka35b6e7f3b35112ef2d30c7fd85ab78a976f53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:02:11.562953  491185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/proxy-client.key ...
	I1002 08:02:11.562993  491185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/proxy-client.key: {Name:mkae9750b305c9942316fb96f4eaa7538bdc1ecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
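A quick way to confirm the SANs baked into the freshly generated profile certs, not part of the logged run (standard openssl usage; the apiserver cert should carry the IPs listed in the crypto.go line above):

openssl x509 -noout -text \
  -in /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.crt \
  | grep -A1 'Subject Alternative Name'
# expect 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.85.2 among the IP SANs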
	I1002 08:02:11.563249  491185 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 08:02:11.563323  491185 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 08:02:11.563353  491185 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 08:02:11.563410  491185 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 08:02:11.563454  491185 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 08:02:11.563503  491185 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 08:02:11.563571  491185 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:02:11.564302  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 08:02:11.581963  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 08:02:11.600732  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 08:02:11.618658  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 08:02:11.637092  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1002 08:02:11.655978  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 08:02:11.674476  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 08:02:11.692728  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 08:02:11.715908  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 08:02:11.744745  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 08:02:11.776350  491185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 08:02:11.805970  491185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 08:02:11.833347  491185 ssh_runner.go:195] Run: openssl version
	I1002 08:02:11.846198  491185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 08:02:11.857786  491185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:02:11.867822  491185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:02:11.867971  491185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:02:11.936019  491185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 08:02:11.948391  491185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 08:02:11.962097  491185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 08:02:11.971779  491185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 08:02:11.971901  491185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 08:02:12.035729  491185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 08:02:12.045037  491185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 08:02:12.054068  491185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 08:02:12.058910  491185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 08:02:12.059028  491185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 08:02:12.113910  491185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
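The hash-named symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's c_rehash convention: the link name is the subject hash printed by the `openssl x509 -hash` calls. A sketch reproducing it for one certificate:

h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
ls -l "/etc/ssl/certs/$h.0"
# lrwxrwxrwx ... /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem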
	I1002 08:02:12.126536  491185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 08:02:12.134250  491185 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 08:02:12.134368  491185 kubeadm.go:400] StartCluster: {Name:embed-certs-171347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:02:12.134520  491185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 08:02:12.134618  491185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:02:12.221567  491185 cri.go:89] found id: ""
	I1002 08:02:12.221727  491185 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 08:02:12.237850  491185 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 08:02:12.262855  491185 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 08:02:12.262974  491185 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 08:02:12.280880  491185 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 08:02:12.280965  491185 kubeadm.go:157] found existing configuration files:
	
	I1002 08:02:12.281053  491185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 08:02:12.295974  491185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 08:02:12.296092  491185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 08:02:12.308260  491185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 08:02:12.320059  491185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 08:02:12.320176  491185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 08:02:12.332220  491185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 08:02:12.345882  491185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 08:02:12.346002  491185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 08:02:12.355029  491185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 08:02:12.365807  491185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 08:02:12.365925  491185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
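The four grep/rm pairs above implement a simple stale-kubeconfig sweep: each of the standard kubeconfig files is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm regenerates it. A sketch of the same logic as one loop:

for f in admin kubelet controller-manager scheduler; do
  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf"; then
    sudo rm -f "/etc/kubernetes/$f.conf"
  fi
done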
	I1002 08:02:12.376700  491185 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 08:02:12.452152  491185 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 08:02:12.452295  491185 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 08:02:12.480828  491185 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 08:02:12.480959  491185 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 08:02:12.481031  491185 kubeadm.go:318] OS: Linux
	I1002 08:02:12.481142  491185 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 08:02:12.481238  491185 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 08:02:12.481335  491185 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 08:02:12.481396  491185 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 08:02:12.481455  491185 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 08:02:12.481509  491185 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 08:02:12.481560  491185 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 08:02:12.481614  491185 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 08:02:12.481667  491185 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 08:02:12.615495  491185 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 08:02:12.615733  491185 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 08:02:12.615881  491185 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 08:02:12.631527  491185 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 08:02:12.637332  491185 out.go:252]   - Generating certificates and keys ...
	I1002 08:02:12.637533  491185 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 08:02:12.637653  491185 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 08:02:13.103170  491185 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 08:02:13.652481  491185 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 08:02:14.552751  491185 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 08:02:14.995175  491185 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 08:02:15.352443  491185 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 08:02:15.352776  491185 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-171347 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 08:02:15.490811  491185 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 08:02:15.491131  491185 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-171347 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 08:02:16.318860  491185 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 08:02:16.643511  491185 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 08:02:17.061170  491185 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 08:02:17.063469  491185 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 08:02:17.815428  491185 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 08:02:18.030371  491185 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 08:02:18.221130  491185 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 08:02:18.757929  491185 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 08:02:18.969921  491185 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 08:02:18.970810  491185 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 08:02:18.973567  491185 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 08:02:16.549926  488189 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 8.644321678s
	I1002 08:02:18.292444  488189 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 10.38769952s
	I1002 08:02:19.907636  488189 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 12.002828093s
	I1002 08:02:19.943808  488189 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 08:02:19.982877  488189 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 08:02:20.008720  488189 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 08:02:20.008932  488189 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-604182 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 08:02:20.025836  488189 kubeadm.go:318] [bootstrap-token] Using token: pn8jyw.009qi0f8ktp3skok
	I1002 08:02:20.028787  488189 out.go:252]   - Configuring RBAC rules ...
	I1002 08:02:20.028926  488189 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 08:02:20.044412  488189 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 08:02:20.065257  488189 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 08:02:20.071480  488189 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 08:02:20.079575  488189 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 08:02:20.087772  488189 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 08:02:20.315473  488189 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 08:02:20.789866  488189 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 08:02:21.320510  488189 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 08:02:21.321679  488189 kubeadm.go:318] 
	I1002 08:02:21.321771  488189 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 08:02:21.321783  488189 kubeadm.go:318] 
	I1002 08:02:21.321864  488189 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 08:02:21.321874  488189 kubeadm.go:318] 
	I1002 08:02:21.321900  488189 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 08:02:21.321966  488189 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 08:02:21.322025  488189 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 08:02:21.322036  488189 kubeadm.go:318] 
	I1002 08:02:21.322097  488189 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 08:02:21.322106  488189 kubeadm.go:318] 
	I1002 08:02:21.322166  488189 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 08:02:21.322177  488189 kubeadm.go:318] 
	I1002 08:02:21.322231  488189 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 08:02:21.322314  488189 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 08:02:21.322389  488189 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 08:02:21.322405  488189 kubeadm.go:318] 
	I1002 08:02:21.322493  488189 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 08:02:21.322580  488189 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 08:02:21.322589  488189 kubeadm.go:318] 
	I1002 08:02:21.322677  488189 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token pn8jyw.009qi0f8ktp3skok \
	I1002 08:02:21.322789  488189 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf \
	I1002 08:02:21.323064  488189 kubeadm.go:318] 	--control-plane 
	I1002 08:02:21.323118  488189 kubeadm.go:318] 
	I1002 08:02:21.323210  488189 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 08:02:21.323221  488189 kubeadm.go:318] 
	I1002 08:02:21.323314  488189 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pn8jyw.009qi0f8ktp3skok \
	I1002 08:02:21.323428  488189 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf 
	I1002 08:02:21.333928  488189 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 08:02:21.334185  488189 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 08:02:21.334303  488189 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 08:02:21.334326  488189 cni.go:84] Creating CNI manager for ""
	I1002 08:02:21.334339  488189 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:02:21.337554  488189 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 08:02:18.976120  491185 out.go:252]   - Booting up control plane ...
	I1002 08:02:18.976282  491185 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 08:02:18.976413  491185 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 08:02:18.976939  491185 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 08:02:18.993407  491185 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 08:02:18.993527  491185 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 08:02:19.001860  491185 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 08:02:19.002526  491185 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 08:02:19.002680  491185 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 08:02:19.171346  491185 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 08:02:19.171480  491185 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 08:02:20.175463  491185 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001891674s
	I1002 08:02:20.176710  491185 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 08:02:20.176955  491185 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 08:02:20.177053  491185 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 08:02:20.177136  491185 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 08:02:21.340461  488189 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 08:02:21.351830  488189 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 08:02:21.351857  488189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 08:02:21.390765  488189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
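After the CNI manifest is applied, the kindnet pods should come up in kube-system; a sketch of a post-apply check (not part of the logged run; the app=kindnet label is an assumption about minikube's kindnet manifest):

sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  get daemonsets,pods -n kube-system -l app=kindnet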
	I1002 08:02:21.974668  488189 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 08:02:21.974894  488189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-604182 minikube.k8s.io/updated_at=2025_10_02T08_02_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=no-preload-604182 minikube.k8s.io/primary=true
	I1002 08:02:21.974943  488189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:22.329155  488189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:22.329274  488189 ops.go:34] apiserver oom_adj: -16
	I1002 08:02:22.829999  488189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:23.329306  488189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:23.829583  488189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:24.329416  488189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:24.829547  488189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:25.329736  488189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:25.652276  488189 kubeadm.go:1113] duration metric: took 3.677500122s to wait for elevateKubeSystemPrivileges
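The repeated `kubectl get sa default` calls above are the elevateKubeSystemPrivileges wait: minikube polls roughly every half second until the default ServiceAccount exists, which also lets the minikube-rbac cluster-admin binding created earlier take effect. A sketch of the equivalent poll:

until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5   # matches the ~500ms spacing between the logged retries
done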
	I1002 08:02:25.652317  488189 kubeadm.go:402] duration metric: took 27.899446499s to StartCluster
	I1002 08:02:25.652336  488189 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:02:25.652401  488189 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:02:25.653051  488189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:02:25.653264  488189 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:02:25.653378  488189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 08:02:25.653603  488189 config.go:182] Loaded profile config "no-preload-604182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:02:25.653637  488189 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 08:02:25.653714  488189 addons.go:69] Setting storage-provisioner=true in profile "no-preload-604182"
	I1002 08:02:25.653726  488189 addons.go:69] Setting default-storageclass=true in profile "no-preload-604182"
	I1002 08:02:25.653751  488189 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-604182"
	I1002 08:02:25.653728  488189 addons.go:238] Setting addon storage-provisioner=true in "no-preload-604182"
	I1002 08:02:25.653885  488189 host.go:66] Checking if "no-preload-604182" exists ...
	I1002 08:02:25.654093  488189 cli_runner.go:164] Run: docker container inspect no-preload-604182 --format={{.State.Status}}
	I1002 08:02:25.654349  488189 cli_runner.go:164] Run: docker container inspect no-preload-604182 --format={{.State.Status}}
	I1002 08:02:25.661214  488189 out.go:179] * Verifying Kubernetes components...
	I1002 08:02:25.664291  488189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:02:25.702095  488189 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 08:02:25.703808  488189 addons.go:238] Setting addon default-storageclass=true in "no-preload-604182"
	I1002 08:02:25.703851  488189 host.go:66] Checking if "no-preload-604182" exists ...
	I1002 08:02:25.704275  488189 cli_runner.go:164] Run: docker container inspect no-preload-604182 --format={{.State.Status}}
	I1002 08:02:25.708821  488189 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:02:25.708846  488189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 08:02:25.708909  488189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:02:25.741514  488189 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 08:02:25.741536  488189 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 08:02:25.741600  488189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:02:25.791238  488189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/no-preload-604182/id_rsa Username:docker}
	I1002 08:02:25.808855  488189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/no-preload-604182/id_rsa Username:docker}
	I1002 08:02:24.736835  491185 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.559266851s
	I1002 08:02:26.331559  488189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 08:02:26.378346  488189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:02:26.618589  488189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:02:26.672538  488189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:02:28.003276  488189 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.671621535s)
	I1002 08:02:28.003462  488189 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
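The long sed pipeline above edits the coredns ConfigMap so the in-cluster resolver can answer host.minikube.internal; a sketch of the Corefile fragment it injects (surrounding directives elided), plus how to read it back:

sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
#        hosts {
#           192.168.76.1 host.minikube.internal
#           fallthrough
#        }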
	I1002 08:02:28.003375  488189 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.624962735s)
	I1002 08:02:28.003402  488189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.384742187s)
	I1002 08:02:28.004351  488189 node_ready.go:35] waiting up to 6m0s for node "no-preload-604182" to be "Ready" ...
	I1002 08:02:28.527610  488189 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-604182" context rescaled to 1 replicas
	I1002 08:02:28.540249  488189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.867621877s)
	I1002 08:02:28.543255  488189 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1002 08:02:30.539747  491185 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 10.362685186s
	I1002 08:02:30.681194  491185 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.501805415s
	I1002 08:02:30.707286  491185 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 08:02:30.733930  491185 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 08:02:30.757279  491185 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 08:02:30.757798  491185 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-171347 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 08:02:30.774248  491185 kubeadm.go:318] [bootstrap-token] Using token: 5jg1id.u7n2ztio7iv8x385
	I1002 08:02:28.546226  488189 addons.go:514] duration metric: took 2.892568813s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1002 08:02:30.008568  488189 node_ready.go:57] node "no-preload-604182" has "Ready":"False" status (will retry)
	I1002 08:02:30.777440  491185 out.go:252]   - Configuring RBAC rules ...
	I1002 08:02:30.779037  491185 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 08:02:30.792437  491185 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 08:02:30.805491  491185 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 08:02:30.811696  491185 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 08:02:30.819689  491185 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 08:02:30.825693  491185 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 08:02:31.087442  491185 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 08:02:31.579299  491185 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 08:02:32.088572  491185 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 08:02:32.089859  491185 kubeadm.go:318] 
	I1002 08:02:32.089938  491185 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 08:02:32.089953  491185 kubeadm.go:318] 
	I1002 08:02:32.090034  491185 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 08:02:32.090044  491185 kubeadm.go:318] 
	I1002 08:02:32.090070  491185 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 08:02:32.090155  491185 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 08:02:32.090216  491185 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 08:02:32.090225  491185 kubeadm.go:318] 
	I1002 08:02:32.090281  491185 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 08:02:32.090290  491185 kubeadm.go:318] 
	I1002 08:02:32.090340  491185 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 08:02:32.090347  491185 kubeadm.go:318] 
	I1002 08:02:32.090408  491185 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 08:02:32.090489  491185 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 08:02:32.090564  491185 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 08:02:32.090575  491185 kubeadm.go:318] 
	I1002 08:02:32.090662  491185 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 08:02:32.090746  491185 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 08:02:32.090754  491185 kubeadm.go:318] 
	I1002 08:02:32.090842  491185 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 5jg1id.u7n2ztio7iv8x385 \
	I1002 08:02:32.090951  491185 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf \
	I1002 08:02:32.090977  491185 kubeadm.go:318] 	--control-plane 
	I1002 08:02:32.090985  491185 kubeadm.go:318] 
	I1002 08:02:32.091073  491185 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 08:02:32.091117  491185 kubeadm.go:318] 
	I1002 08:02:32.091203  491185 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 5jg1id.u7n2ztio7iv8x385 \
	I1002 08:02:32.091313  491185 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf 
	I1002 08:02:32.095532  491185 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 08:02:32.095769  491185 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 08:02:32.095885  491185 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
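
Note: the two `kubeadm join` commands in the init output above carry a `--discovery-token-ca-cert-hash`. kubeadm derives that value as the SHA-256 of the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA certificate. A minimal Go sketch that recomputes it, assuming the CA is readable at the conventional /etc/kubernetes/pki/ca.crt path on the control-plane node (illustrative only, not part of the test run):

// Hedged sketch: recompute the discovery-token CA cert hash shown in the
// join command above. Assumes /etc/kubernetes/pki/ca.crt holds the cluster CA.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
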
	I1002 08:02:32.096238  491185 cni.go:84] Creating CNI manager for ""
	I1002 08:02:32.096300  491185 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:02:32.099342  491185 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 08:02:32.102271  491185 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 08:02:32.110003  491185 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 08:02:32.110027  491185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 08:02:32.133407  491185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 08:02:32.848726  491185 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 08:02:32.848870  491185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:32.848960  491185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-171347 minikube.k8s.io/updated_at=2025_10_02T08_02_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=embed-certs-171347 minikube.k8s.io/primary=true
	I1002 08:02:33.059158  491185 ops.go:34] apiserver oom_adj: -16
	I1002 08:02:33.059284  491185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1002 08:02:32.013090  488189 node_ready.go:57] node "no-preload-604182" has "Ready":"False" status (will retry)
	W1002 08:02:34.507777  488189 node_ready.go:57] node "no-preload-604182" has "Ready":"False" status (will retry)
	I1002 08:02:33.559368  491185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:34.059856  491185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:34.559919  491185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:35.059437  491185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:35.560328  491185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:36.059861  491185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:36.560202  491185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:02:36.669478  491185 kubeadm.go:1113] duration metric: took 3.820657544s to wait for elevateKubeSystemPrivileges
	I1002 08:02:36.669510  491185 kubeadm.go:402] duration metric: took 24.535145208s to StartCluster
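
Note: the repeated `kubectl get sa default` runs above are minikube polling until the default service account exists before the `minikube-rbac` cluster-admin binding created at 08:02:32 can take effect. A minimal client-go sketch of an equivalent wait (package name, poll interval and the 2-minute timeout are illustrative assumptions, not minikube's actual implementation):

package example

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultSA polls until the "default" ServiceAccount shows up,
// roughly what the retried `kubectl get sa default` calls above verify.
func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet, keep polling
			}
			return err == nil, err
		})
}
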
	I1002 08:02:36.669529  491185 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:02:36.669595  491185 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:02:36.671002  491185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:02:36.671326  491185 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 08:02:36.671341  491185 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:02:36.671587  491185 config.go:182] Loaded profile config "embed-certs-171347": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:02:36.671621  491185 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 08:02:36.671679  491185 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-171347"
	I1002 08:02:36.671696  491185 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-171347"
	I1002 08:02:36.671717  491185 host.go:66] Checking if "embed-certs-171347" exists ...
	I1002 08:02:36.672211  491185 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:02:36.672364  491185 addons.go:69] Setting default-storageclass=true in profile "embed-certs-171347"
	I1002 08:02:36.672378  491185 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-171347"
	I1002 08:02:36.672613  491185 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:02:36.675170  491185 out.go:179] * Verifying Kubernetes components...
	I1002 08:02:36.678120  491185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:02:36.732169  491185 addons.go:238] Setting addon default-storageclass=true in "embed-certs-171347"
	I1002 08:02:36.732228  491185 host.go:66] Checking if "embed-certs-171347" exists ...
	I1002 08:02:36.732679  491185 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:02:36.734752  491185 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 08:02:36.737656  491185 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:02:36.737702  491185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 08:02:36.737782  491185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:02:36.771245  491185 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 08:02:36.771273  491185 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 08:02:36.771336  491185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:02:36.773274  491185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:02:36.806063  491185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:02:36.992954  491185 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 08:02:37.046734  491185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:02:37.131077  491185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:02:37.193657  491185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:02:37.623300  491185 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
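
Note: the sed pipeline run at 08:02:36.992954 above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway. Reconstructed from that sed expression, the Corefile gains a hosts block of this shape just above the existing `forward . /etc/resolv.conf` stanza (plus a `log` directive ahead of `errors`):

        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }
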
	I1002 08:02:37.624516  491185 node_ready.go:35] waiting up to 6m0s for node "embed-certs-171347" to be "Ready" ...
	I1002 08:02:37.954821  491185 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 08:02:37.957730  491185 addons.go:514] duration metric: took 1.286075273s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 08:02:38.128176  491185 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-171347" context rescaled to 1 replicas
	W1002 08:02:37.007819  488189 node_ready.go:57] node "no-preload-604182" has "Ready":"False" status (will retry)
	W1002 08:02:39.009337  488189 node_ready.go:57] node "no-preload-604182" has "Ready":"False" status (will retry)
	W1002 08:02:39.627936  491185 node_ready.go:57] node "embed-certs-171347" has "Ready":"False" status (will retry)
	W1002 08:02:42.130769  491185 node_ready.go:57] node "embed-certs-171347" has "Ready":"False" status (will retry)
	W1002 08:02:41.507401  488189 node_ready.go:57] node "no-preload-604182" has "Ready":"False" status (will retry)
	I1002 08:02:42.507199  488189 node_ready.go:49] node "no-preload-604182" is "Ready"
	I1002 08:02:42.507233  488189 node_ready.go:38] duration metric: took 14.502859346s for node "no-preload-604182" to be "Ready" ...
	I1002 08:02:42.507248  488189 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:02:42.507314  488189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:02:42.520270  488189 api_server.go:72] duration metric: took 16.866969661s to wait for apiserver process to appear ...
	I1002 08:02:42.520308  488189 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:02:42.520329  488189 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 08:02:42.528465  488189 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 08:02:42.530528  488189 api_server.go:141] control plane version: v1.34.1
	I1002 08:02:42.530559  488189 api_server.go:131] duration metric: took 10.243188ms to wait for apiserver health ...
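
Note: the healthz probe above is a plain GET on the apiserver's secured port that returns the literal string "ok". A minimal client-go sketch of the same check, using the on-node kubeconfig path the log itself references (minikube's retry and timeout handling around it is omitted):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative: the kubeconfig path used on the node in the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz against the apiserver, as the check above does.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // prints "ok" when the control plane is healthy
}
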
	I1002 08:02:42.530569  488189 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:02:42.533878  488189 system_pods.go:59] 8 kube-system pods found
	I1002 08:02:42.533915  488189 system_pods.go:61] "coredns-66bc5c9577-74zfp" [0aa93160-9105-470c-b62f-c8d0949da486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:02:42.533923  488189 system_pods.go:61] "etcd-no-preload-604182" [3f1d8eef-d3ec-41bd-856b-dd8687a2862e] Running
	I1002 08:02:42.533928  488189 system_pods.go:61] "kindnet-5zjv7" [578c1406-9933-4a37-9826-4f696b5a3e38] Running
	I1002 08:02:42.533932  488189 system_pods.go:61] "kube-apiserver-no-preload-604182" [51e008fc-06a1-447a-b35a-ac2dc7470dad] Running
	I1002 08:02:42.533938  488189 system_pods.go:61] "kube-controller-manager-no-preload-604182" [732af5c5-135d-4cc9-9df1-ae4053eae345] Running
	I1002 08:02:42.533954  488189 system_pods.go:61] "kube-proxy-qn6pp" [fe309cb8-ddea-4301-a231-bf301f3e25d6] Running
	I1002 08:02:42.533960  488189 system_pods.go:61] "kube-scheduler-no-preload-604182" [659bcac4-deed-4fa0-ae78-72e9e27c83da] Running
	I1002 08:02:42.533966  488189 system_pods.go:61] "storage-provisioner" [323f91d2-af51-47c9-8da6-0768f4dc30ab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:02:42.533983  488189 system_pods.go:74] duration metric: took 3.407326ms to wait for pod list to return data ...
	I1002 08:02:42.533992  488189 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:02:42.543314  488189 default_sa.go:45] found service account: "default"
	I1002 08:02:42.543346  488189 default_sa.go:55] duration metric: took 9.346277ms for default service account to be created ...
	I1002 08:02:42.543356  488189 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 08:02:42.551264  488189 system_pods.go:86] 8 kube-system pods found
	I1002 08:02:42.551304  488189 system_pods.go:89] "coredns-66bc5c9577-74zfp" [0aa93160-9105-470c-b62f-c8d0949da486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:02:42.551311  488189 system_pods.go:89] "etcd-no-preload-604182" [3f1d8eef-d3ec-41bd-856b-dd8687a2862e] Running
	I1002 08:02:42.551317  488189 system_pods.go:89] "kindnet-5zjv7" [578c1406-9933-4a37-9826-4f696b5a3e38] Running
	I1002 08:02:42.551321  488189 system_pods.go:89] "kube-apiserver-no-preload-604182" [51e008fc-06a1-447a-b35a-ac2dc7470dad] Running
	I1002 08:02:42.551326  488189 system_pods.go:89] "kube-controller-manager-no-preload-604182" [732af5c5-135d-4cc9-9df1-ae4053eae345] Running
	I1002 08:02:42.551330  488189 system_pods.go:89] "kube-proxy-qn6pp" [fe309cb8-ddea-4301-a231-bf301f3e25d6] Running
	I1002 08:02:42.551335  488189 system_pods.go:89] "kube-scheduler-no-preload-604182" [659bcac4-deed-4fa0-ae78-72e9e27c83da] Running
	I1002 08:02:42.551343  488189 system_pods.go:89] "storage-provisioner" [323f91d2-af51-47c9-8da6-0768f4dc30ab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:02:42.551366  488189 retry.go:31] will retry after 255.909212ms: missing components: kube-dns
	I1002 08:02:42.811331  488189 system_pods.go:86] 8 kube-system pods found
	I1002 08:02:42.811368  488189 system_pods.go:89] "coredns-66bc5c9577-74zfp" [0aa93160-9105-470c-b62f-c8d0949da486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:02:42.811375  488189 system_pods.go:89] "etcd-no-preload-604182" [3f1d8eef-d3ec-41bd-856b-dd8687a2862e] Running
	I1002 08:02:42.811382  488189 system_pods.go:89] "kindnet-5zjv7" [578c1406-9933-4a37-9826-4f696b5a3e38] Running
	I1002 08:02:42.811386  488189 system_pods.go:89] "kube-apiserver-no-preload-604182" [51e008fc-06a1-447a-b35a-ac2dc7470dad] Running
	I1002 08:02:42.811391  488189 system_pods.go:89] "kube-controller-manager-no-preload-604182" [732af5c5-135d-4cc9-9df1-ae4053eae345] Running
	I1002 08:02:42.811395  488189 system_pods.go:89] "kube-proxy-qn6pp" [fe309cb8-ddea-4301-a231-bf301f3e25d6] Running
	I1002 08:02:42.811399  488189 system_pods.go:89] "kube-scheduler-no-preload-604182" [659bcac4-deed-4fa0-ae78-72e9e27c83da] Running
	I1002 08:02:42.811405  488189 system_pods.go:89] "storage-provisioner" [323f91d2-af51-47c9-8da6-0768f4dc30ab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:02:42.811419  488189 retry.go:31] will retry after 256.315833ms: missing components: kube-dns
	I1002 08:02:43.072577  488189 system_pods.go:86] 8 kube-system pods found
	I1002 08:02:43.072693  488189 system_pods.go:89] "coredns-66bc5c9577-74zfp" [0aa93160-9105-470c-b62f-c8d0949da486] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:02:43.072778  488189 system_pods.go:89] "etcd-no-preload-604182" [3f1d8eef-d3ec-41bd-856b-dd8687a2862e] Running
	I1002 08:02:43.072822  488189 system_pods.go:89] "kindnet-5zjv7" [578c1406-9933-4a37-9826-4f696b5a3e38] Running
	I1002 08:02:43.072852  488189 system_pods.go:89] "kube-apiserver-no-preload-604182" [51e008fc-06a1-447a-b35a-ac2dc7470dad] Running
	I1002 08:02:43.072894  488189 system_pods.go:89] "kube-controller-manager-no-preload-604182" [732af5c5-135d-4cc9-9df1-ae4053eae345] Running
	I1002 08:02:43.072942  488189 system_pods.go:89] "kube-proxy-qn6pp" [fe309cb8-ddea-4301-a231-bf301f3e25d6] Running
	I1002 08:02:43.072964  488189 system_pods.go:89] "kube-scheduler-no-preload-604182" [659bcac4-deed-4fa0-ae78-72e9e27c83da] Running
	I1002 08:02:43.073035  488189 system_pods.go:89] "storage-provisioner" [323f91d2-af51-47c9-8da6-0768f4dc30ab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:02:43.073073  488189 retry.go:31] will retry after 404.542407ms: missing components: kube-dns
	I1002 08:02:43.481757  488189 system_pods.go:86] 8 kube-system pods found
	I1002 08:02:43.481791  488189 system_pods.go:89] "coredns-66bc5c9577-74zfp" [0aa93160-9105-470c-b62f-c8d0949da486] Running
	I1002 08:02:43.481798  488189 system_pods.go:89] "etcd-no-preload-604182" [3f1d8eef-d3ec-41bd-856b-dd8687a2862e] Running
	I1002 08:02:43.481803  488189 system_pods.go:89] "kindnet-5zjv7" [578c1406-9933-4a37-9826-4f696b5a3e38] Running
	I1002 08:02:43.481808  488189 system_pods.go:89] "kube-apiserver-no-preload-604182" [51e008fc-06a1-447a-b35a-ac2dc7470dad] Running
	I1002 08:02:43.481814  488189 system_pods.go:89] "kube-controller-manager-no-preload-604182" [732af5c5-135d-4cc9-9df1-ae4053eae345] Running
	I1002 08:02:43.481818  488189 system_pods.go:89] "kube-proxy-qn6pp" [fe309cb8-ddea-4301-a231-bf301f3e25d6] Running
	I1002 08:02:43.481822  488189 system_pods.go:89] "kube-scheduler-no-preload-604182" [659bcac4-deed-4fa0-ae78-72e9e27c83da] Running
	I1002 08:02:43.481825  488189 system_pods.go:89] "storage-provisioner" [323f91d2-af51-47c9-8da6-0768f4dc30ab] Running
	I1002 08:02:43.481833  488189 system_pods.go:126] duration metric: took 938.471954ms to wait for k8s-apps to be running ...
	I1002 08:02:43.481842  488189 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 08:02:43.481904  488189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:02:43.497279  488189 system_svc.go:56] duration metric: took 15.424396ms WaitForService to wait for kubelet
	I1002 08:02:43.497312  488189 kubeadm.go:586] duration metric: took 17.844015229s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:02:43.497333  488189 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:02:43.500712  488189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:02:43.500748  488189 node_conditions.go:123] node cpu capacity is 2
	I1002 08:02:43.500763  488189 node_conditions.go:105] duration metric: took 3.424064ms to run NodePressure ...
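
Note: the NodePressure step reads its figures straight off the Node object, which is where the 203034800Ki and 2-CPU capacity values above come from. A small client-go sketch of that lookup (package and function names are illustrative):

package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// logNodeCapacity prints the same capacity fields the NodePressure check logs.
func logNodeCapacity(ctx context.Context, cs kubernetes.Interface, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("node storage ephemeral capacity is %s\n", node.Status.Capacity.StorageEphemeral().String())
	fmt.Printf("node cpu capacity is %s\n", node.Status.Capacity.Cpu().String())
	return nil
}
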
	I1002 08:02:43.500776  488189 start.go:241] waiting for startup goroutines ...
	I1002 08:02:43.500784  488189 start.go:246] waiting for cluster config update ...
	I1002 08:02:43.500797  488189 start.go:255] writing updated cluster config ...
	I1002 08:02:43.501100  488189 ssh_runner.go:195] Run: rm -f paused
	I1002 08:02:43.505863  488189 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:02:43.510002  488189 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-74zfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:02:43.515436  488189 pod_ready.go:94] pod "coredns-66bc5c9577-74zfp" is "Ready"
	I1002 08:02:43.515480  488189 pod_ready.go:86] duration metric: took 5.451767ms for pod "coredns-66bc5c9577-74zfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:02:43.518318  488189 pod_ready.go:83] waiting for pod "etcd-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:02:43.523075  488189 pod_ready.go:94] pod "etcd-no-preload-604182" is "Ready"
	I1002 08:02:43.523196  488189 pod_ready.go:86] duration metric: took 4.843368ms for pod "etcd-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:02:43.525655  488189 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:02:43.532419  488189 pod_ready.go:94] pod "kube-apiserver-no-preload-604182" is "Ready"
	I1002 08:02:43.532450  488189 pod_ready.go:86] duration metric: took 6.766322ms for pod "kube-apiserver-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:02:43.534977  488189 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:02:43.910423  488189 pod_ready.go:94] pod "kube-controller-manager-no-preload-604182" is "Ready"
	I1002 08:02:43.910453  488189 pod_ready.go:86] duration metric: took 375.448689ms for pod "kube-controller-manager-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:02:44.110638  488189 pod_ready.go:83] waiting for pod "kube-proxy-qn6pp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:02:44.509987  488189 pod_ready.go:94] pod "kube-proxy-qn6pp" is "Ready"
	I1002 08:02:44.510018  488189 pod_ready.go:86] duration metric: took 399.350489ms for pod "kube-proxy-qn6pp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:02:44.710064  488189 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:02:45.114522  488189 pod_ready.go:94] pod "kube-scheduler-no-preload-604182" is "Ready"
	I1002 08:02:45.114604  488189 pod_ready.go:86] duration metric: took 404.511102ms for pod "kube-scheduler-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:02:45.114636  488189 pod_ready.go:40] duration metric: took 1.608732496s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
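
Note: the "extra waiting" above checks each labelled kube-system pod for the PodReady condition (or for the pod to be gone). The per-pod readiness test reduces to roughly the following sketch; label selection and the 4m timeout are left out:

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// podIsReady reports whether the pod's Ready condition is True,
// the sense of "Ready" used by the wait in the log above.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
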
	I1002 08:02:45.270799  488189 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 08:02:45.274782  488189 out.go:179] * Done! kubectl is now configured to use "no-preload-604182" cluster and "default" namespace by default
	W1002 08:02:44.627350  491185 node_ready.go:57] node "embed-certs-171347" has "Ready":"False" status (will retry)
	W1002 08:02:46.628066  491185 node_ready.go:57] node "embed-certs-171347" has "Ready":"False" status (will retry)
	W1002 08:02:49.128065  491185 node_ready.go:57] node "embed-certs-171347" has "Ready":"False" status (will retry)
	W1002 08:02:51.128352  491185 node_ready.go:57] node "embed-certs-171347" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 02 08:02:42 no-preload-604182 crio[839]: time="2025-10-02T08:02:42.708117967Z" level=info msg="Created container 2ac01c0607af4ed57a984e5bcef75eb580a4ddf396d3004531ab44fb2e66e7c4: kube-system/coredns-66bc5c9577-74zfp/coredns" id=b56f172f-fe3d-4d6d-b003-3d589dd42a7a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:02:42 no-preload-604182 crio[839]: time="2025-10-02T08:02:42.709015403Z" level=info msg="Starting container: 2ac01c0607af4ed57a984e5bcef75eb580a4ddf396d3004531ab44fb2e66e7c4" id=3e108a88-0962-40ed-b75d-ebf74f997464 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:02:42 no-preload-604182 crio[839]: time="2025-10-02T08:02:42.711136227Z" level=info msg="Started container" PID=2477 containerID=2ac01c0607af4ed57a984e5bcef75eb580a4ddf396d3004531ab44fb2e66e7c4 description=kube-system/coredns-66bc5c9577-74zfp/coredns id=3e108a88-0962-40ed-b75d-ebf74f997464 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09d54bb2047c138704cf7b333e253a95f878f83941bc839a55f3161b390df337
	Oct 02 08:02:45 no-preload-604182 crio[839]: time="2025-10-02T08:02:45.919245769Z" level=info msg="Running pod sandbox: default/busybox/POD" id=409d5760-ea0b-43ac-9401-f83f50a86e18 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:02:45 no-preload-604182 crio[839]: time="2025-10-02T08:02:45.919318434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:02:45 no-preload-604182 crio[839]: time="2025-10-02T08:02:45.927582026Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:596d73b58721f134afe5eeb58069d3e15a66db95467d9ba291f53fd93ef00716 UID:79649b38-6d08-4670-b939-ea8b9b38a4ad NetNS:/var/run/netns/4da52db3-f49f-4377-833a-d6adb40ec8e4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004b6c28}] Aliases:map[]}"
	Oct 02 08:02:45 no-preload-604182 crio[839]: time="2025-10-02T08:02:45.927623799Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 02 08:02:45 no-preload-604182 crio[839]: time="2025-10-02T08:02:45.937614381Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:596d73b58721f134afe5eeb58069d3e15a66db95467d9ba291f53fd93ef00716 UID:79649b38-6d08-4670-b939-ea8b9b38a4ad NetNS:/var/run/netns/4da52db3-f49f-4377-833a-d6adb40ec8e4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004b6c28}] Aliases:map[]}"
	Oct 02 08:02:45 no-preload-604182 crio[839]: time="2025-10-02T08:02:45.937770025Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 02 08:02:45 no-preload-604182 crio[839]: time="2025-10-02T08:02:45.942053757Z" level=info msg="Ran pod sandbox 596d73b58721f134afe5eeb58069d3e15a66db95467d9ba291f53fd93ef00716 with infra container: default/busybox/POD" id=409d5760-ea0b-43ac-9401-f83f50a86e18 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:02:45 no-preload-604182 crio[839]: time="2025-10-02T08:02:45.943549599Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c60369c1-87d5-4ab2-86c9-e3d64db3db38 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:02:45 no-preload-604182 crio[839]: time="2025-10-02T08:02:45.943721637Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c60369c1-87d5-4ab2-86c9-e3d64db3db38 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:02:45 no-preload-604182 crio[839]: time="2025-10-02T08:02:45.943773002Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c60369c1-87d5-4ab2-86c9-e3d64db3db38 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:02:45 no-preload-604182 crio[839]: time="2025-10-02T08:02:45.94669883Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=563da26b-d88f-41c5-978a-42d37c249f82 name=/runtime.v1.ImageService/PullImage
	Oct 02 08:02:45 no-preload-604182 crio[839]: time="2025-10-02T08:02:45.951397818Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 02 08:02:47 no-preload-604182 crio[839]: time="2025-10-02T08:02:47.906672909Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=563da26b-d88f-41c5-978a-42d37c249f82 name=/runtime.v1.ImageService/PullImage
	Oct 02 08:02:47 no-preload-604182 crio[839]: time="2025-10-02T08:02:47.907582423Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b2d053a2-b4ea-4e27-b3df-dc8eb2995ba6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:02:47 no-preload-604182 crio[839]: time="2025-10-02T08:02:47.909971172Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ffab7a77-c95b-49d9-8fa4-aceead8239ca name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:02:47 no-preload-604182 crio[839]: time="2025-10-02T08:02:47.91582574Z" level=info msg="Creating container: default/busybox/busybox" id=c7577cf9-9bbd-44bd-b631-1f02e67b769e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:02:47 no-preload-604182 crio[839]: time="2025-10-02T08:02:47.916611011Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:02:47 no-preload-604182 crio[839]: time="2025-10-02T08:02:47.921274939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:02:47 no-preload-604182 crio[839]: time="2025-10-02T08:02:47.921755231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:02:47 no-preload-604182 crio[839]: time="2025-10-02T08:02:47.937072533Z" level=info msg="Created container 365bf065cde8856d77951a8557f5a7c56958f6239ed02eb82b06bac9fdc9499c: default/busybox/busybox" id=c7577cf9-9bbd-44bd-b631-1f02e67b769e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:02:47 no-preload-604182 crio[839]: time="2025-10-02T08:02:47.939339581Z" level=info msg="Starting container: 365bf065cde8856d77951a8557f5a7c56958f6239ed02eb82b06bac9fdc9499c" id=e479f239-fb1e-4b97-aefe-84c28ad06ef6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:02:47 no-preload-604182 crio[839]: time="2025-10-02T08:02:47.942380192Z" level=info msg="Started container" PID=2530 containerID=365bf065cde8856d77951a8557f5a7c56958f6239ed02eb82b06bac9fdc9499c description=default/busybox/busybox id=e479f239-fb1e-4b97-aefe-84c28ad06ef6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=596d73b58721f134afe5eeb58069d3e15a66db95467d9ba291f53fd93ef00716
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	365bf065cde88       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   596d73b58721f       busybox                                     default
	2ac01c0607af4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   09d54bb2047c1       coredns-66bc5c9577-74zfp                    kube-system
	9c6fa9eb18d94       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   5945404fee9bc       storage-provisioner                         kube-system
	ac9fbce67c319       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   2effd00779bf8       kindnet-5zjv7                               kube-system
	80255440c9e6a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      29 seconds ago      Running             kube-proxy                0                   6b1958af3b867       kube-proxy-qn6pp                            kube-system
	31139242ede69       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      46 seconds ago      Running             kube-scheduler            0                   72a85d26d01de       kube-scheduler-no-preload-604182            kube-system
	5c38e1f4f235c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      46 seconds ago      Running             etcd                      0                   93ccefedd49fa       etcd-no-preload-604182                      kube-system
	6aef135ba3882       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      46 seconds ago      Running             kube-controller-manager   0                   cfc48569ff06d       kube-controller-manager-no-preload-604182   kube-system
	31c7565b71210       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      46 seconds ago      Running             kube-apiserver            0                   88523de7b29c4       kube-apiserver-no-preload-604182            kube-system
	
	
	==> coredns [2ac01c0607af4ed57a984e5bcef75eb580a4ddf396d3004531ab44fb2e66e7c4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40188 - 3448 "HINFO IN 7763319235365477239.1669197982135159556. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012781828s
	
	
	==> describe nodes <==
	Name:               no-preload-604182
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-604182
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=no-preload-604182
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T08_02_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 08:02:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-604182
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:02:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:02:51 +0000   Thu, 02 Oct 2025 08:02:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:02:51 +0000   Thu, 02 Oct 2025 08:02:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:02:51 +0000   Thu, 02 Oct 2025 08:02:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 08:02:51 +0000   Thu, 02 Oct 2025 08:02:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-604182
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56bf2277b15844b8a1d79be2d004daf2
	  System UUID:                65f354cd-b030-437d-9beb-12ea491c6172
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-74zfp                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-604182                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-5zjv7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-604182             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-604182    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-qn6pp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-604182             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node no-preload-604182 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node no-preload-604182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node no-preload-604182 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node no-preload-604182 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node no-preload-604182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s                kubelet          Node no-preload-604182 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node no-preload-604182 event: Registered Node no-preload-604182 in Controller
	  Normal   NodeReady                13s                kubelet          Node no-preload-604182 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 07:31] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:33] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:00] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:02] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5c38e1f4f235c8d17345e85c7c287cdf9227824661e6bc520cfaa8d904dcc6c9] <==
	{"level":"warn","ts":"2025-10-02T08:02:14.763613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:14.819975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:14.879278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:14.909689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:14.951322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:14.993329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.015351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.058473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.136183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.248053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.283984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.316046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.419789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.447368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.535314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.594429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.689141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.723016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.766775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.904711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.938192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:15.980871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:16.034499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:16.075225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:16.281902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38590","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:02:55 up  2:45,  0 user,  load average: 5.65, 2.88, 2.08
	Linux no-preload-604182 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ac9fbce67c319814c78143010cceea0e6aeff77ef656015bb93409d6705bad91] <==
	I1002 08:02:31.905511       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 08:02:31.907695       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 08:02:31.907891       1 main.go:148] setting mtu 1500 for CNI 
	I1002 08:02:31.907956       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 08:02:31.908004       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T08:02:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 08:02:32.205851       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 08:02:32.207598       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 08:02:32.212715       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 08:02:32.213426       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 08:02:32.399263       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 08:02:32.399388       1 metrics.go:72] Registering metrics
	I1002 08:02:32.399491       1 controller.go:711] "Syncing nftables rules"
	I1002 08:02:42.212086       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:02:42.212154       1 main.go:301] handling current node
	I1002 08:02:52.204933       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:02:52.204977       1 main.go:301] handling current node
	
	
	==> kube-apiserver [31c7565b71210ef3ce1ccdd3afd158405e4e7992aca14e355b068f476f821624] <==
	I1002 08:02:18.170770       1 cache.go:39] Caches are synced for autoregister controller
	I1002 08:02:18.170975       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 08:02:18.176115       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 08:02:18.200931       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:02:18.210964       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 08:02:18.220852       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1002 08:02:18.307952       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:02:18.315438       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 08:02:18.682560       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 08:02:18.701904       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 08:02:18.701993       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:02:19.734899       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:02:19.822312       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:02:19.956116       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 08:02:19.989436       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 08:02:20.019268       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1002 08:02:20.020708       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 08:02:20.033106       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 08:02:20.744680       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 08:02:20.788446       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 08:02:20.816013       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 08:02:25.350826       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 08:02:25.496099       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1002 08:02:26.043144       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:02:26.123780       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6aef135ba3882f210b134ae6b7699f86d8050b8bc6cb9b4c565df4b8d2ee00ea] <==
	I1002 08:02:25.177005       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 08:02:25.179155       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 08:02:25.179209       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 08:02:25.179365       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 08:02:25.179446       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-604182"
	I1002 08:02:25.179494       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 08:02:25.179661       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 08:02:25.179708       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 08:02:25.179866       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 08:02:25.180283       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 08:02:25.182903       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-604182" podCIDRs=["10.244.0.0/24"]
	I1002 08:02:25.182960       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 08:02:25.183019       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 08:02:25.184877       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 08:02:25.185060       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 08:02:25.185118       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 08:02:25.189235       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 08:02:25.194828       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 08:02:25.195565       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 08:02:25.198628       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 08:02:25.231300       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:02:25.235222       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:02:25.235251       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 08:02:25.235260       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 08:02:45.187790       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [80255440c9e6a2f1047c54095e8e2de9d8838d30dcaa0390cd4a1be1c8f13513] <==
	I1002 08:02:26.842298       1 server_linux.go:53] "Using iptables proxy"
	I1002 08:02:26.977828       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 08:02:27.087203       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 08:02:27.087246       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 08:02:27.087326       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 08:02:27.224329       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 08:02:27.224384       1 server_linux.go:132] "Using iptables Proxier"
	I1002 08:02:27.233946       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 08:02:27.234277       1 server.go:527] "Version info" version="v1.34.1"
	I1002 08:02:27.234294       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:02:27.235823       1 config.go:200] "Starting service config controller"
	I1002 08:02:27.235834       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 08:02:27.235849       1 config.go:106] "Starting endpoint slice config controller"
	I1002 08:02:27.235853       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 08:02:27.235864       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 08:02:27.235868       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 08:02:27.236443       1 config.go:309] "Starting node config controller"
	I1002 08:02:27.236450       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 08:02:27.236455       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 08:02:27.339178       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 08:02:27.339212       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 08:02:27.339246       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [31139242ede696104bce690822907a456a5fbc57700dd24e46c5353993cd70e3] <==
	E1002 08:02:18.291341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 08:02:18.299489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 08:02:18.299672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 08:02:18.299773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 08:02:18.299884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 08:02:18.299979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 08:02:18.300065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 08:02:18.300150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 08:02:18.300237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 08:02:18.300326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 08:02:18.300415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 08:02:18.300505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 08:02:18.300647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 08:02:18.300787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 08:02:18.300853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 08:02:18.305710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 08:02:19.122223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 08:02:19.122380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 08:02:19.135075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 08:02:19.165897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 08:02:19.223376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 08:02:19.336580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 08:02:19.422425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 08:02:19.773987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 08:02:22.792288       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 08:02:25 no-preload-604182 kubelet[2003]: I1002 08:02:25.247866    2003 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 08:02:25 no-preload-604182 kubelet[2003]: I1002 08:02:25.249214    2003 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 08:02:25 no-preload-604182 kubelet[2003]: I1002 08:02:25.597738    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fe309cb8-ddea-4301-a231-bf301f3e25d6-kube-proxy\") pod \"kube-proxy-qn6pp\" (UID: \"fe309cb8-ddea-4301-a231-bf301f3e25d6\") " pod="kube-system/kube-proxy-qn6pp"
	Oct 02 08:02:25 no-preload-604182 kubelet[2003]: I1002 08:02:25.597830    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/578c1406-9933-4a37-9826-4f696b5a3e38-xtables-lock\") pod \"kindnet-5zjv7\" (UID: \"578c1406-9933-4a37-9826-4f696b5a3e38\") " pod="kube-system/kindnet-5zjv7"
	Oct 02 08:02:25 no-preload-604182 kubelet[2003]: I1002 08:02:25.597853    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe309cb8-ddea-4301-a231-bf301f3e25d6-xtables-lock\") pod \"kube-proxy-qn6pp\" (UID: \"fe309cb8-ddea-4301-a231-bf301f3e25d6\") " pod="kube-system/kube-proxy-qn6pp"
	Oct 02 08:02:25 no-preload-604182 kubelet[2003]: I1002 08:02:25.597870    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe309cb8-ddea-4301-a231-bf301f3e25d6-lib-modules\") pod \"kube-proxy-qn6pp\" (UID: \"fe309cb8-ddea-4301-a231-bf301f3e25d6\") " pod="kube-system/kube-proxy-qn6pp"
	Oct 02 08:02:25 no-preload-604182 kubelet[2003]: I1002 08:02:25.597919    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/578c1406-9933-4a37-9826-4f696b5a3e38-lib-modules\") pod \"kindnet-5zjv7\" (UID: \"578c1406-9933-4a37-9826-4f696b5a3e38\") " pod="kube-system/kindnet-5zjv7"
	Oct 02 08:02:25 no-preload-604182 kubelet[2003]: I1002 08:02:25.597942    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/578c1406-9933-4a37-9826-4f696b5a3e38-cni-cfg\") pod \"kindnet-5zjv7\" (UID: \"578c1406-9933-4a37-9826-4f696b5a3e38\") " pod="kube-system/kindnet-5zjv7"
	Oct 02 08:02:25 no-preload-604182 kubelet[2003]: I1002 08:02:25.597996    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knzw7\" (UniqueName: \"kubernetes.io/projected/578c1406-9933-4a37-9826-4f696b5a3e38-kube-api-access-knzw7\") pod \"kindnet-5zjv7\" (UID: \"578c1406-9933-4a37-9826-4f696b5a3e38\") " pod="kube-system/kindnet-5zjv7"
	Oct 02 08:02:25 no-preload-604182 kubelet[2003]: I1002 08:02:25.598022    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvl48\" (UniqueName: \"kubernetes.io/projected/fe309cb8-ddea-4301-a231-bf301f3e25d6-kube-api-access-bvl48\") pod \"kube-proxy-qn6pp\" (UID: \"fe309cb8-ddea-4301-a231-bf301f3e25d6\") " pod="kube-system/kube-proxy-qn6pp"
	Oct 02 08:02:25 no-preload-604182 kubelet[2003]: I1002 08:02:25.792340    2003 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 08:02:25 no-preload-604182 kubelet[2003]: W1002 08:02:25.930538    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/crio-6b1958af3b867c81eeace282076c0758cfacb6c414fcf655a4c4c2f3f8051b6c WatchSource:0}: Error finding container 6b1958af3b867c81eeace282076c0758cfacb6c414fcf655a4c4c2f3f8051b6c: Status 404 returned error can't find the container with id 6b1958af3b867c81eeace282076c0758cfacb6c414fcf655a4c4c2f3f8051b6c
	Oct 02 08:02:26 no-preload-604182 kubelet[2003]: W1002 08:02:26.351424    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/crio-2effd00779bf84e1e00e0609b363f2e8c8f13653af3c33c4579ba981ccce0792 WatchSource:0}: Error finding container 2effd00779bf84e1e00e0609b363f2e8c8f13653af3c33c4579ba981ccce0792: Status 404 returned error can't find the container with id 2effd00779bf84e1e00e0609b363f2e8c8f13653af3c33c4579ba981ccce0792
	Oct 02 08:02:26 no-preload-604182 kubelet[2003]: W1002 08:02:26.375462    2003 watcher.go:93] Error while processing event ("/sys/fs/cgroup/blkio/docker/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/crio-2effd00779bf84e1e00e0609b363f2e8c8f13653af3c33c4579ba981ccce0792": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/docker/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/crio-2effd00779bf84e1e00e0609b363f2e8c8f13653af3c33c4579ba981ccce0792: no such file or directory
	Oct 02 08:02:27 no-preload-604182 kubelet[2003]: I1002 08:02:27.740509    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qn6pp" podStartSLOduration=2.740488094 podStartE2EDuration="2.740488094s" podCreationTimestamp="2025-10-02 08:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:02:27.353826224 +0000 UTC m=+6.683191122" watchObservedRunningTime="2025-10-02 08:02:27.740488094 +0000 UTC m=+7.069853009"
	Oct 02 08:02:32 no-preload-604182 kubelet[2003]: I1002 08:02:32.364810    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5zjv7" podStartSLOduration=2.011930158 podStartE2EDuration="7.364783712s" podCreationTimestamp="2025-10-02 08:02:25 +0000 UTC" firstStartedPulling="2025-10-02 08:02:26.415463076 +0000 UTC m=+5.744827966" lastFinishedPulling="2025-10-02 08:02:31.76831663 +0000 UTC m=+11.097681520" observedRunningTime="2025-10-02 08:02:32.364368381 +0000 UTC m=+11.693733271" watchObservedRunningTime="2025-10-02 08:02:32.364783712 +0000 UTC m=+11.694148610"
	Oct 02 08:02:42 no-preload-604182 kubelet[2003]: I1002 08:02:42.239766    2003 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 02 08:02:42 no-preload-604182 kubelet[2003]: I1002 08:02:42.434690    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf7gm\" (UniqueName: \"kubernetes.io/projected/323f91d2-af51-47c9-8da6-0768f4dc30ab-kube-api-access-zf7gm\") pod \"storage-provisioner\" (UID: \"323f91d2-af51-47c9-8da6-0768f4dc30ab\") " pod="kube-system/storage-provisioner"
	Oct 02 08:02:42 no-preload-604182 kubelet[2003]: I1002 08:02:42.434754    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0aa93160-9105-470c-b62f-c8d0949da486-config-volume\") pod \"coredns-66bc5c9577-74zfp\" (UID: \"0aa93160-9105-470c-b62f-c8d0949da486\") " pod="kube-system/coredns-66bc5c9577-74zfp"
	Oct 02 08:02:42 no-preload-604182 kubelet[2003]: I1002 08:02:42.434815    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db722\" (UniqueName: \"kubernetes.io/projected/0aa93160-9105-470c-b62f-c8d0949da486-kube-api-access-db722\") pod \"coredns-66bc5c9577-74zfp\" (UID: \"0aa93160-9105-470c-b62f-c8d0949da486\") " pod="kube-system/coredns-66bc5c9577-74zfp"
	Oct 02 08:02:42 no-preload-604182 kubelet[2003]: I1002 08:02:42.434840    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/323f91d2-af51-47c9-8da6-0768f4dc30ab-tmp\") pod \"storage-provisioner\" (UID: \"323f91d2-af51-47c9-8da6-0768f4dc30ab\") " pod="kube-system/storage-provisioner"
	Oct 02 08:02:43 no-preload-604182 kubelet[2003]: I1002 08:02:43.411643    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-74zfp" podStartSLOduration=17.411613843 podStartE2EDuration="17.411613843s" podCreationTimestamp="2025-10-02 08:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:02:43.382702184 +0000 UTC m=+22.712067082" watchObservedRunningTime="2025-10-02 08:02:43.411613843 +0000 UTC m=+22.740978733"
	Oct 02 08:02:45 no-preload-604182 kubelet[2003]: I1002 08:02:45.609704    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.609686728 podStartE2EDuration="17.609686728s" podCreationTimestamp="2025-10-02 08:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:02:43.436305393 +0000 UTC m=+22.765670291" watchObservedRunningTime="2025-10-02 08:02:45.609686728 +0000 UTC m=+24.939051617"
	Oct 02 08:02:45 no-preload-604182 kubelet[2003]: I1002 08:02:45.762754    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqbgp\" (UniqueName: \"kubernetes.io/projected/79649b38-6d08-4670-b939-ea8b9b38a4ad-kube-api-access-hqbgp\") pod \"busybox\" (UID: \"79649b38-6d08-4670-b939-ea8b9b38a4ad\") " pod="default/busybox"
	Oct 02 08:02:45 no-preload-604182 kubelet[2003]: W1002 08:02:45.939762    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/crio-596d73b58721f134afe5eeb58069d3e15a66db95467d9ba291f53fd93ef00716 WatchSource:0}: Error finding container 596d73b58721f134afe5eeb58069d3e15a66db95467d9ba291f53fd93ef00716: Status 404 returned error can't find the container with id 596d73b58721f134afe5eeb58069d3e15a66db95467d9ba291f53fd93ef00716
	
	
	==> storage-provisioner [9c6fa9eb18d946b626d5751f602967539dfaf63290dd09747b4f28d7993e6fcb] <==
	I1002 08:02:42.702009       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 08:02:42.724008       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 08:02:42.724134       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 08:02:42.728640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:02:42.743879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:02:42.744118       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 08:02:42.744341       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-604182_6005b136-4663-4c6e-9d2f-976aa165d092!
	I1002 08:02:42.748725       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce44b2f7-3b72-4264-8678-b29a955c98d3", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-604182_6005b136-4663-4c6e-9d2f-976aa165d092 became leader
	W1002 08:02:42.750184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:02:42.758228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:02:42.845876       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-604182_6005b136-4663-4c6e-9d2f-976aa165d092!
	W1002 08:02:44.760937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:02:44.767780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:02:46.771587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:02:46.779941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:02:48.783300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:02:48.792079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:02:50.796282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:02:50.801142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:02:52.805329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:02:52.810227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:02:54.813850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:02:54.818693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-604182 -n no-preload-604182
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-604182 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.62s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-171347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-171347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (296.700464ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:03:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
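The advice box above names the two artifacts worth collecting; a minimal collection sketch, assuming the embed-certs-171347 profile from this run and the log path quoted in the advice box:

	# dump full cluster logs to a file, as the error message suggests
	out/minikube-linux-arm64 -p embed-certs-171347 logs --file=logs.txt
	# keep a copy of the addon-specific log referenced in the advice box
	cp /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log .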
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-171347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-171347 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-171347 describe deploy/metrics-server -n kube-system: exit status 1 (158.347904ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-171347 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
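The MK_ADDON_ENABLE_PAUSED failure above comes from the "check paused" step, which (per the stderr) shells out to "sudo runc list -f json" inside the node and hits the missing /run/runc directory; a minimal manual re-run of that check, assuming the docker driver and the embed-certs-171347 profile from this run:

	# confirm whether the runc state directory the error complains about exists in the node
	out/minikube-linux-arm64 ssh -p embed-certs-171347 -- "sudo ls -la /run/runc"
	# re-run the exact command the addon enable path failed on
	out/minikube-linux-arm64 ssh -p embed-certs-171347 -- "sudo runc list -f json"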
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-171347
helpers_test.go:243: (dbg) docker inspect embed-certs-171347:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa",
	        "Created": "2025-10-02T08:02:00.578455149Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 491814,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T08:02:00.681098546Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/hostname",
	        "HostsPath": "/var/lib/docker/containers/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/hosts",
	        "LogPath": "/var/lib/docker/containers/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa-json.log",
	        "Name": "/embed-certs-171347",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-171347:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-171347",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa",
	                "LowerDir": "/var/lib/docker/overlay2/c92ba62aeaf74f1e329cdefec79ac5294c1ee446a93853845f2f03c39bb325b3-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c92ba62aeaf74f1e329cdefec79ac5294c1ee446a93853845f2f03c39bb325b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c92ba62aeaf74f1e329cdefec79ac5294c1ee446a93853845f2f03c39bb325b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c92ba62aeaf74f1e329cdefec79ac5294c1ee446a93853845f2f03c39bb325b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-171347",
	                "Source": "/var/lib/docker/volumes/embed-certs-171347/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-171347",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-171347",
	                "name.minikube.sigs.k8s.io": "embed-certs-171347",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ea30337865eb1da755fb071e288bb92c2885a702d1b7d38039b45b22fdeccca",
	            "SandboxKey": "/var/run/docker/netns/3ea30337865e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-171347": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:5e:46:5e:d0:82",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02e39ca8e594ec82c902deecf74b9a14d44881e9835232c2f729a3d1bc104bcc",
	                    "EndpointID": "e1082f208522bb4f1d921a9326147a28d639bfc56342ed62868228248364e09e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-171347",
	                        "411784c5c3f5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-171347 -n embed-certs-171347
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-171347 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-171347 logs -n 25: (1.588827044s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p force-systemd-flag-275910                                                                                                                                                                                                                  │ force-systemd-flag-275910 │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ start   │ -p cert-expiration-759246 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-759246    │ jenkins │ v1.37.0 │ 02 Oct 25 07:56 UTC │ 02 Oct 25 07:56 UTC │
	│ delete  │ -p force-systemd-env-297062                                                                                                                                                                                                                   │ force-systemd-env-297062  │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ start   │ -p cert-options-654417 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ ssh     │ cert-options-654417 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ ssh     │ -p cert-options-654417 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ delete  │ -p cert-options-654417                                                                                                                                                                                                                        │ cert-options-654417       │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:59 UTC │
	│ start   │ -p cert-expiration-759246 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-759246    │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │ 02 Oct 25 08:01 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-356986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │                     │
	│ stop    │ -p old-k8s-version-356986 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:00 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-356986 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:00 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:01 UTC │
	│ image   │ old-k8s-version-356986 image list --format=json                                                                                                                                                                                               │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ pause   │ -p old-k8s-version-356986 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │                     │
	│ delete  │ -p old-k8s-version-356986                                                                                                                                                                                                                     │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ delete  │ -p old-k8s-version-356986                                                                                                                                                                                                                     │ old-k8s-version-356986    │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182         │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:02 UTC │
	│ delete  │ -p cert-expiration-759246                                                                                                                                                                                                                     │ cert-expiration-759246    │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347        │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-604182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-604182         │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │                     │
	│ stop    │ -p no-preload-604182 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-604182         │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p no-preload-604182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-604182         │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182         │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-171347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-171347        │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:03:08
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:03:08.545409  495337 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:03:08.545610  495337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:03:08.545640  495337 out.go:374] Setting ErrFile to fd 2...
	I1002 08:03:08.545660  495337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:03:08.545988  495337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:03:08.546441  495337 out.go:368] Setting JSON to false
	I1002 08:03:08.547520  495337 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9940,"bootTime":1759382249,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 08:03:08.547625  495337 start.go:140] virtualization:  
	I1002 08:03:08.552775  495337 out.go:179] * [no-preload-604182] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:03:08.555913  495337 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:03:08.556084  495337 notify.go:220] Checking for updates...
	I1002 08:03:08.561703  495337 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:03:08.564600  495337 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:03:08.567409  495337 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 08:03:08.570301  495337 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:03:08.573688  495337 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:03:08.576962  495337 config.go:182] Loaded profile config "no-preload-604182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:03:08.577517  495337 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:03:08.609213  495337 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:03:08.609341  495337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:03:08.681723  495337 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:03:08.670607462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:03:08.681840  495337 docker.go:318] overlay module found
	I1002 08:03:08.684989  495337 out.go:179] * Using the docker driver based on existing profile
	I1002 08:03:08.687927  495337 start.go:304] selected driver: docker
	I1002 08:03:08.687959  495337 start.go:924] validating driver "docker" against &{Name:no-preload-604182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-604182 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:03:08.688074  495337 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:03:08.688896  495337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:03:08.745890  495337 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:03:08.735492753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:03:08.746293  495337 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:03:08.746326  495337 cni.go:84] Creating CNI manager for ""
	I1002 08:03:08.746398  495337 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:03:08.746444  495337 start.go:348] cluster config:
	{Name:no-preload-604182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-604182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
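The cluster config dumped above is the same structure minikube persists to the profile's config.json (the save path appears a few lines below). A hypothetical spot-check of that file, assuming jq is installed on the Jenkins host; the field names are taken from the dump above:

	jq '{version: .KubernetesConfig.KubernetesVersion, runtime: .KubernetesConfig.ContainerRuntime, node: .Nodes[0].IP}' \
	  /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/config.json
	# expected, per the config above: v1.34.1, crio, 192.168.76.2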
	I1002 08:03:08.751369  495337 out.go:179] * Starting "no-preload-604182" primary control-plane node in "no-preload-604182" cluster
	I1002 08:03:08.754202  495337 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 08:03:08.757063  495337 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 08:03:08.759950  495337 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:03:08.760033  495337 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 08:03:08.760104  495337 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/config.json ...
	I1002 08:03:08.760379  495337 cache.go:107] acquiring lock: {Name:mk6201e00fe8824949f6f5208e56eaa0a0dbce5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:03:08.760469  495337 cache.go:115] /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 08:03:08.760481  495337 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.789µs
	I1002 08:03:08.760495  495337 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 08:03:08.760507  495337 cache.go:107] acquiring lock: {Name:mk4f22c8113378a6335e73ec712ef29cecea809b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:03:08.760608  495337 cache.go:115] /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1002 08:03:08.760624  495337 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 118.688µs
	I1002 08:03:08.760633  495337 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1002 08:03:08.760665  495337 cache.go:107] acquiring lock: {Name:mkba38926bd1f013acdacf3513472887554833c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:03:08.760703  495337 cache.go:115] /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1002 08:03:08.760709  495337 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 47.845µs
	I1002 08:03:08.760715  495337 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1002 08:03:08.760724  495337 cache.go:107] acquiring lock: {Name:mkee1d77fb0477b011ee7fe0f893ca44c93542dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:03:08.760719  495337 cache.go:107] acquiring lock: {Name:mk9d9ba11e07307e731e690cf81cdf36e43fafba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:03:08.760752  495337 cache.go:115] /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1002 08:03:08.760758  495337 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 34.659µs
	I1002 08:03:08.760763  495337 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1002 08:03:08.760786  495337 cache.go:115] /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1002 08:03:08.760793  495337 cache.go:107] acquiring lock: {Name:mk170ad1200d98dd424da5ed02407a1a36017c78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:03:08.760795  495337 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 90.011µs
	I1002 08:03:08.760808  495337 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1002 08:03:08.760827  495337 cache.go:115] /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1002 08:03:08.760822  495337 cache.go:107] acquiring lock: {Name:mk5e6d2b14d03b631fd293aeea379ed9eba3063c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:03:08.760833  495337 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 41.797µs
	I1002 08:03:08.760840  495337 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1002 08:03:08.760854  495337 cache.go:115] /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1002 08:03:08.760860  495337 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 40.296µs
	I1002 08:03:08.760866  495337 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1002 08:03:08.760856  495337 cache.go:107] acquiring lock: {Name:mk68e668e04cb544e1e9c34f002697d9ed6432a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:03:08.760905  495337 cache.go:115] /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1002 08:03:08.760911  495337 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 56.378µs
	I1002 08:03:08.760917  495337 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21643-292504/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1002 08:03:08.760931  495337 cache.go:87] Successfully saved all images to host disk.
	I1002 08:03:08.781274  495337 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 08:03:08.781302  495337 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 08:03:08.781316  495337 cache.go:232] Successfully downloaded all kic artifacts
	I1002 08:03:08.781347  495337 start.go:360] acquireMachinesLock for no-preload-604182: {Name:mkd05bfc392a01ec6ccaa3b682e582b153f09bfa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:03:08.781407  495337 start.go:364] duration metric: took 38.959µs to acquireMachinesLock for "no-preload-604182"
	I1002 08:03:08.781433  495337 start.go:96] Skipping create...Using existing machine configuration
	I1002 08:03:08.781445  495337 fix.go:54] fixHost starting: 
	I1002 08:03:08.781721  495337 cli_runner.go:164] Run: docker container inspect no-preload-604182 --format={{.State.Status}}
	I1002 08:03:08.799528  495337 fix.go:112] recreateIfNeeded on no-preload-604182: state=Stopped err=<nil>
	W1002 08:03:08.799565  495337 fix.go:138] unexpected machine state, will restart: <nil>
	W1002 08:03:08.628601  491185 node_ready.go:57] node "embed-certs-171347" has "Ready":"False" status (will retry)
	W1002 08:03:11.128718  491185 node_ready.go:57] node "embed-certs-171347" has "Ready":"False" status (will retry)
	I1002 08:03:08.802975  495337 out.go:252] * Restarting existing docker container for "no-preload-604182" ...
	I1002 08:03:08.803124  495337 cli_runner.go:164] Run: docker start no-preload-604182
	I1002 08:03:09.078968  495337 cli_runner.go:164] Run: docker container inspect no-preload-604182 --format={{.State.Status}}
	I1002 08:03:09.098321  495337 kic.go:430] container "no-preload-604182" state is running.
	I1002 08:03:09.098733  495337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-604182
	I1002 08:03:09.128385  495337 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/config.json ...
	I1002 08:03:09.128614  495337 machine.go:93] provisionDockerMachine start ...
	I1002 08:03:09.128674  495337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:03:09.150720  495337 main.go:141] libmachine: Using SSH client type: native
	I1002 08:03:09.151282  495337 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1002 08:03:09.151300  495337 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 08:03:09.153217  495337 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47832->127.0.0.1:33418: read: connection reset by peer
	I1002 08:03:12.290594  495337 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-604182
	
	I1002 08:03:12.290620  495337 ubuntu.go:182] provisioning hostname "no-preload-604182"
	I1002 08:03:12.290710  495337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:03:12.311029  495337 main.go:141] libmachine: Using SSH client type: native
	I1002 08:03:12.311385  495337 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1002 08:03:12.311406  495337 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-604182 && echo "no-preload-604182" | sudo tee /etc/hostname
	I1002 08:03:12.452879  495337 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-604182
	
	I1002 08:03:12.452963  495337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:03:12.472486  495337 main.go:141] libmachine: Using SSH client type: native
	I1002 08:03:12.472802  495337 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1002 08:03:12.472826  495337 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-604182' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-604182/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-604182' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 08:03:12.607265  495337 main.go:141] libmachine: SSH cmd err, output: <nil>: 
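The script above leaves /etc/hosts alone when the hostname is already mapped; otherwise it rewrites the existing 127.0.1.1 line or appends one. The empty SSH output indicates it ran cleanly. A hypothetical way to confirm the mapping from the host side (container name taken from the log):

	docker exec no-preload-604182 grep -n 'no-preload-604182' /etc/hosts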
	I1002 08:03:12.607302  495337 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 08:03:12.607329  495337 ubuntu.go:190] setting up certificates
	I1002 08:03:12.607338  495337 provision.go:84] configureAuth start
	I1002 08:03:12.607402  495337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-604182
	I1002 08:03:12.624777  495337 provision.go:143] copyHostCerts
	I1002 08:03:12.624852  495337 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 08:03:12.624866  495337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 08:03:12.624949  495337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 08:03:12.625064  495337 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 08:03:12.625075  495337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 08:03:12.625105  495337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 08:03:12.625170  495337 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 08:03:12.625180  495337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 08:03:12.625204  495337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 08:03:12.625265  495337 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.no-preload-604182 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-604182]
	I1002 08:03:12.743456  495337 provision.go:177] copyRemoteCerts
	I1002 08:03:12.743560  495337 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 08:03:12.743621  495337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:03:12.760688  495337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/no-preload-604182/id_rsa Username:docker}
	I1002 08:03:12.859443  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 08:03:12.878146  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 08:03:12.898092  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 08:03:12.917368  495337 provision.go:87] duration metric: took 310.003363ms to configureAuth
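configureAuth above generates a server certificate signed by the minikube CA and copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. A hypothetical sanity check that the copied server cert chains to the copied CA, assuming openssl is available inside the kicbase image:

	docker exec no-preload-604182 openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	# expected: /etc/docker/server.pem: OK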
	I1002 08:03:12.917396  495337 ubuntu.go:206] setting minikube options for container-runtime
	I1002 08:03:12.917615  495337 config.go:182] Loaded profile config "no-preload-604182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:03:12.917733  495337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:03:12.934953  495337 main.go:141] libmachine: Using SSH client type: native
	I1002 08:03:12.935313  495337 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1002 08:03:12.935337  495337 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 08:03:13.254649  495337 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 08:03:13.254675  495337 machine.go:96] duration metric: took 4.126051438s to provisionDockerMachine
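The SSH command a few lines above writes /etc/sysconfig/crio.minikube and restarts CRI-O so the cluster service CIDR is treated as an insecure registry range. A hypothetical check that the option file landed as expected:

	docker exec no-preload-604182 cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '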
	I1002 08:03:13.254686  495337 start.go:293] postStartSetup for "no-preload-604182" (driver="docker")
	I1002 08:03:13.254707  495337 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 08:03:13.254770  495337 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 08:03:13.254826  495337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:03:13.277115  495337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/no-preload-604182/id_rsa Username:docker}
	I1002 08:03:13.375796  495337 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 08:03:13.379277  495337 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 08:03:13.379307  495337 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 08:03:13.379319  495337 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 08:03:13.379377  495337 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 08:03:13.379461  495337 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 08:03:13.379568  495337 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 08:03:13.387748  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:03:13.409179  495337 start.go:296] duration metric: took 154.466908ms for postStartSetup
	I1002 08:03:13.409265  495337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 08:03:13.409329  495337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:03:13.427481  495337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/no-preload-604182/id_rsa Username:docker}
	I1002 08:03:13.520266  495337 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 08:03:13.525332  495337 fix.go:56] duration metric: took 4.743885779s for fixHost
	I1002 08:03:13.525360  495337 start.go:83] releasing machines lock for "no-preload-604182", held for 4.74393931s
	I1002 08:03:13.525429  495337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-604182
	I1002 08:03:13.542089  495337 ssh_runner.go:195] Run: cat /version.json
	I1002 08:03:13.542157  495337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:03:13.542346  495337 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 08:03:13.542397  495337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:03:13.561975  495337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/no-preload-604182/id_rsa Username:docker}
	I1002 08:03:13.564808  495337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/no-preload-604182/id_rsa Username:docker}
	I1002 08:03:13.762580  495337 ssh_runner.go:195] Run: systemctl --version
	I1002 08:03:13.769118  495337 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 08:03:13.808080  495337 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 08:03:13.812705  495337 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 08:03:13.812835  495337 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 08:03:13.821101  495337 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 08:03:13.821129  495337 start.go:495] detecting cgroup driver to use...
	I1002 08:03:13.821193  495337 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 08:03:13.821273  495337 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 08:03:13.837403  495337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 08:03:13.850597  495337 docker.go:218] disabling cri-docker service (if available) ...
	I1002 08:03:13.850673  495337 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 08:03:13.866866  495337 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 08:03:13.880841  495337 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 08:03:14.006418  495337 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 08:03:14.146955  495337 docker.go:234] disabling docker service ...
	I1002 08:03:14.147027  495337 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 08:03:14.164730  495337 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 08:03:14.179297  495337 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 08:03:14.316596  495337 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 08:03:14.439751  495337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 08:03:14.453057  495337 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 08:03:14.467888  495337 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 08:03:14.467954  495337 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:14.476926  495337 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 08:03:14.476997  495337 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:14.486228  495337 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:14.495306  495337 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:14.504028  495337 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 08:03:14.512591  495337 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:14.521960  495337 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:14.530762  495337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:14.539972  495337 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 08:03:14.547367  495337 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 08:03:14.554689  495337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:03:14.681563  495337 ssh_runner.go:195] Run: sudo systemctl restart crio
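The sequence of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf before the restart: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A hypothetical spot-check of the resulting drop-in:

	docker exec no-preload-604182 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the commands above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",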
	I1002 08:03:14.834615  495337 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 08:03:14.834754  495337 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 08:03:14.839425  495337 start.go:563] Will wait 60s for crictl version
	I1002 08:03:14.839524  495337 ssh_runner.go:195] Run: which crictl
	I1002 08:03:14.843628  495337 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 08:03:14.869920  495337 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 08:03:14.870061  495337 ssh_runner.go:195] Run: crio --version
	I1002 08:03:14.904329  495337 ssh_runner.go:195] Run: crio --version
	I1002 08:03:14.939306  495337 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 08:03:14.942199  495337 cli_runner.go:164] Run: docker network inspect no-preload-604182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:03:14.960825  495337 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 08:03:14.965154  495337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
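The one-liner above is minikube's hosts-rewrite pattern: filter out any stale host.minikube.internal entry, append the current gateway mapping, write to a PID-keyed temp file, then copy it back over /etc/hosts with sudo. A hypothetical standalone rendition of the same command, with the runtime values inlined:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.76.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts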
	I1002 08:03:14.977308  495337 kubeadm.go:883] updating cluster {Name:no-preload-604182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-604182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 08:03:14.977420  495337 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:03:14.977463  495337 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:03:15.052525  495337 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:03:15.052549  495337 cache_images.go:85] Images are preloaded, skipping loading
	I1002 08:03:15.052557  495337 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 08:03:15.052665  495337 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-604182 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-604182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
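The kubelet unit override above is what gets written to the node as a systemd drop-in a few lines below (10-kubeadm.conf, 367 bytes). A hypothetical way to inspect the effective unit on the node once it is in place:

	docker exec no-preload-604182 systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in
	# containing the ExecStart override shown above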
	I1002 08:03:15.052754  495337 ssh_runner.go:195] Run: crio config
	I1002 08:03:15.135722  495337 cni.go:84] Creating CNI manager for ""
	I1002 08:03:15.135748  495337 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:03:15.135765  495337 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 08:03:15.135795  495337 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-604182 NodeName:no-preload-604182 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 08:03:15.135936  495337 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-604182"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 08:03:15.136009  495337 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 08:03:15.145723  495337 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 08:03:15.145807  495337 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 08:03:15.156338  495337 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 08:03:15.169925  495337 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 08:03:15.183316  495337 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
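The generated kubeadm config is copied to the node as /var/tmp/minikube/kubeadm.yaml.new (2214 bytes, per the scp line above). As a hypothetical extra check that is not part of the test run, and assuming `kubeadm config validate` behaves the same in v1.34.1 as in recent releases, the file could be validated in place:

	docker exec no-preload-604182 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new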
	I1002 08:03:15.197221  495337 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 08:03:15.201478  495337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:03:15.211975  495337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:03:15.335231  495337 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:03:15.354718  495337 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182 for IP: 192.168.76.2
	I1002 08:03:15.354751  495337 certs.go:195] generating shared ca certs ...
	I1002 08:03:15.354768  495337 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:03:15.354929  495337 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 08:03:15.354988  495337 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 08:03:15.355001  495337 certs.go:257] generating profile certs ...
	I1002 08:03:15.355123  495337 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.key
	I1002 08:03:15.355207  495337 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.key.e3932ce3
	I1002 08:03:15.355269  495337 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/proxy-client.key
	I1002 08:03:15.355387  495337 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 08:03:15.355432  495337 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 08:03:15.355446  495337 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 08:03:15.355491  495337 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 08:03:15.355517  495337 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 08:03:15.355561  495337 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 08:03:15.355614  495337 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:03:15.356275  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 08:03:15.376239  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 08:03:15.396092  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 08:03:15.417097  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 08:03:15.440039  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 08:03:15.468837  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 08:03:15.492362  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 08:03:15.517051  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 08:03:15.543245  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 08:03:15.565013  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 08:03:15.590372  495337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 08:03:15.612730  495337 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 08:03:15.628277  495337 ssh_runner.go:195] Run: openssl version
	I1002 08:03:15.637664  495337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 08:03:15.649309  495337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:03:15.655067  495337 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:03:15.655169  495337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:03:15.704838  495337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 08:03:15.714346  495337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 08:03:15.723949  495337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 08:03:15.727769  495337 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 08:03:15.727837  495337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 08:03:15.769218  495337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 08:03:15.777465  495337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 08:03:15.786196  495337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 08:03:15.790105  495337 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 08:03:15.790183  495337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 08:03:15.832816  495337 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 08:03:15.842066  495337 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 08:03:15.847147  495337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 08:03:15.889053  495337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 08:03:15.932597  495337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 08:03:15.976556  495337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 08:03:16.036470  495337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 08:03:16.112059  495337 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
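The six openssl invocations above use `-checkend 86400` to verify that each control-plane certificate remains valid for at least another 24 hours (a zero exit means it is). A hypothetical manual look at one cert's actual expiry date:

	docker exec no-preload-604182 sudo openssl x509 -noout -enddate \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt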
	I1002 08:03:16.200760  495337 kubeadm.go:400] StartCluster: {Name:no-preload-604182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-604182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:03:16.200920  495337 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 08:03:16.201042  495337 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:03:16.255914  495337 cri.go:89] found id: "3e1fc7a1946e3a39d39fe7e56e659a01f9a77a1b064829ae68f8e7533e1798bc"
	I1002 08:03:16.256002  495337 cri.go:89] found id: "77029f6aa5b6233463612c47bb436aebdb6578cbd16ee091398e61c2c07d4608"
	I1002 08:03:16.256025  495337 cri.go:89] found id: "3094807a90d6dcd41655425e2f8000995d5181c4b8e85810c853b4db03b96dc4"
	I1002 08:03:16.256060  495337 cri.go:89] found id: ""
	I1002 08:03:16.256155  495337 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 08:03:16.277560  495337 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:03:16Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:03:16.277712  495337 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 08:03:16.293013  495337 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 08:03:16.293097  495337 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 08:03:16.293188  495337 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 08:03:16.308352  495337 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 08:03:16.309399  495337 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-604182" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:03:16.310135  495337 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-292504/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-604182" cluster setting kubeconfig missing "no-preload-604182" context setting]
	I1002 08:03:16.312078  495337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:03:16.314493  495337 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 08:03:16.338835  495337 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 08:03:16.338921  495337 kubeadm.go:601] duration metric: took 45.803228ms to restartPrimaryControlPlane
	I1002 08:03:16.338946  495337 kubeadm.go:402] duration metric: took 138.230565ms to StartCluster
	I1002 08:03:16.338996  495337 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:03:16.339157  495337 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:03:16.341102  495337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:03:16.341479  495337 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:03:16.341681  495337 config.go:182] Loaded profile config "no-preload-604182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:03:16.341728  495337 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 08:03:16.341794  495337 addons.go:69] Setting storage-provisioner=true in profile "no-preload-604182"
	I1002 08:03:16.341820  495337 addons.go:238] Setting addon storage-provisioner=true in "no-preload-604182"
	W1002 08:03:16.341828  495337 addons.go:247] addon storage-provisioner should already be in state true
	I1002 08:03:16.341850  495337 host.go:66] Checking if "no-preload-604182" exists ...
	I1002 08:03:16.341955  495337 addons.go:69] Setting dashboard=true in profile "no-preload-604182"
	I1002 08:03:16.342020  495337 addons.go:238] Setting addon dashboard=true in "no-preload-604182"
	W1002 08:03:16.342046  495337 addons.go:247] addon dashboard should already be in state true
	I1002 08:03:16.342100  495337 host.go:66] Checking if "no-preload-604182" exists ...
	I1002 08:03:16.342304  495337 cli_runner.go:164] Run: docker container inspect no-preload-604182 --format={{.State.Status}}
	I1002 08:03:16.342756  495337 cli_runner.go:164] Run: docker container inspect no-preload-604182 --format={{.State.Status}}
	I1002 08:03:16.343297  495337 addons.go:69] Setting default-storageclass=true in profile "no-preload-604182"
	I1002 08:03:16.343329  495337 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-604182"
	I1002 08:03:16.343627  495337 cli_runner.go:164] Run: docker container inspect no-preload-604182 --format={{.State.Status}}
	I1002 08:03:16.346438  495337 out.go:179] * Verifying Kubernetes components...
	I1002 08:03:16.352639  495337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:03:16.389571  495337 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 08:03:16.393253  495337 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 08:03:16.396194  495337 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 08:03:16.396223  495337 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 08:03:16.396299  495337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:03:16.407460  495337 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1002 08:03:13.627889  491185 node_ready.go:57] node "embed-certs-171347" has "Ready":"False" status (will retry)
	W1002 08:03:15.628944  491185 node_ready.go:57] node "embed-certs-171347" has "Ready":"False" status (will retry)
	W1002 08:03:18.127787  491185 node_ready.go:57] node "embed-certs-171347" has "Ready":"False" status (will retry)
	I1002 08:03:16.411204  495337 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:03:16.411229  495337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 08:03:16.411295  495337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:03:16.411964  495337 addons.go:238] Setting addon default-storageclass=true in "no-preload-604182"
	W1002 08:03:16.411986  495337 addons.go:247] addon default-storageclass should already be in state true
	I1002 08:03:16.412013  495337 host.go:66] Checking if "no-preload-604182" exists ...
	I1002 08:03:16.412429  495337 cli_runner.go:164] Run: docker container inspect no-preload-604182 --format={{.State.Status}}
	I1002 08:03:16.432130  495337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/no-preload-604182/id_rsa Username:docker}
	I1002 08:03:16.461840  495337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/no-preload-604182/id_rsa Username:docker}
	I1002 08:03:16.477017  495337 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 08:03:16.477040  495337 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 08:03:16.477105  495337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:03:16.506729  495337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/no-preload-604182/id_rsa Username:docker}
	I1002 08:03:16.657060  495337 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 08:03:16.657083  495337 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 08:03:16.710594  495337 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 08:03:16.710675  495337 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 08:03:16.743062  495337 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:03:16.745617  495337 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:03:16.782547  495337 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 08:03:16.782614  495337 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 08:03:16.828661  495337 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 08:03:16.828733  495337 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 08:03:16.873009  495337 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:03:16.875466  495337 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 08:03:16.875537  495337 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 08:03:16.952746  495337 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 08:03:16.952822  495337 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 08:03:16.995990  495337 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 08:03:16.996073  495337 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 08:03:17.050489  495337 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 08:03:17.050564  495337 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 08:03:17.080929  495337 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 08:03:17.080999  495337 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 08:03:17.111067  495337 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 08:03:19.628477  491185 node_ready.go:49] node "embed-certs-171347" is "Ready"
	I1002 08:03:19.628512  491185 node_ready.go:38] duration metric: took 42.003931155s for node "embed-certs-171347" to be "Ready" ...
	I1002 08:03:19.628528  491185 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:03:19.628591  491185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:03:19.641883  491185 api_server.go:72] duration metric: took 42.970500333s to wait for apiserver process to appear ...
	I1002 08:03:19.641913  491185 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:03:19.641933  491185 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 08:03:19.656037  491185 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
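	The healthz probe logged here is a plain HTTPS GET against the apiserver; a minimal manual reproduction (skipping certificate verification, an assumption of this sketch rather than what minikube does) would be:
	
		curl -k https://192.168.85.2:8443/healthz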
	I1002 08:03:19.657720  491185 api_server.go:141] control plane version: v1.34.1
	I1002 08:03:19.657751  491185 api_server.go:131] duration metric: took 15.831005ms to wait for apiserver health ...
	I1002 08:03:19.657761  491185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:03:19.665805  491185 system_pods.go:59] 8 kube-system pods found
	I1002 08:03:19.665847  491185 system_pods.go:61] "coredns-66bc5c9577-h88d8" [2f1ec40b-c756-4c21-b68c-293d99715917] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:03:19.665856  491185 system_pods.go:61] "etcd-embed-certs-171347" [926ce91c-0431-4ba1-b44e-fffbf0775a3b] Running
	I1002 08:03:19.665862  491185 system_pods.go:61] "kindnet-q6rpr" [debb56b0-5037-4c8f-83f9-277929580103] Running
	I1002 08:03:19.665868  491185 system_pods.go:61] "kube-apiserver-embed-certs-171347" [e47c2d75-962d-4fcc-b386-ca8894e72519] Running
	I1002 08:03:19.665873  491185 system_pods.go:61] "kube-controller-manager-embed-certs-171347" [d161f53c-5955-4fee-b51b-766596a6970c] Running
	I1002 08:03:19.665880  491185 system_pods.go:61] "kube-proxy-jzmxf" [0bb71089-73b5-4b6c-92cd-0c4ba1aee456] Running
	I1002 08:03:19.665886  491185 system_pods.go:61] "kube-scheduler-embed-certs-171347" [8fbc6745-47c9-43ca-af46-4746f82e41f3] Running
	I1002 08:03:19.665893  491185 system_pods.go:61] "storage-provisioner" [b206ffb9-0004-486d-98ff-d23a63b69555] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:03:19.665906  491185 system_pods.go:74] duration metric: took 8.139144ms to wait for pod list to return data ...
	I1002 08:03:19.665915  491185 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:03:19.679703  491185 default_sa.go:45] found service account: "default"
	I1002 08:03:19.679734  491185 default_sa.go:55] duration metric: took 13.807167ms for default service account to be created ...
	I1002 08:03:19.679744  491185 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 08:03:19.683958  491185 system_pods.go:86] 8 kube-system pods found
	I1002 08:03:19.683994  491185 system_pods.go:89] "coredns-66bc5c9577-h88d8" [2f1ec40b-c756-4c21-b68c-293d99715917] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:03:19.684005  491185 system_pods.go:89] "etcd-embed-certs-171347" [926ce91c-0431-4ba1-b44e-fffbf0775a3b] Running
	I1002 08:03:19.684012  491185 system_pods.go:89] "kindnet-q6rpr" [debb56b0-5037-4c8f-83f9-277929580103] Running
	I1002 08:03:19.684017  491185 system_pods.go:89] "kube-apiserver-embed-certs-171347" [e47c2d75-962d-4fcc-b386-ca8894e72519] Running
	I1002 08:03:19.684023  491185 system_pods.go:89] "kube-controller-manager-embed-certs-171347" [d161f53c-5955-4fee-b51b-766596a6970c] Running
	I1002 08:03:19.684027  491185 system_pods.go:89] "kube-proxy-jzmxf" [0bb71089-73b5-4b6c-92cd-0c4ba1aee456] Running
	I1002 08:03:19.684032  491185 system_pods.go:89] "kube-scheduler-embed-certs-171347" [8fbc6745-47c9-43ca-af46-4746f82e41f3] Running
	I1002 08:03:19.684038  491185 system_pods.go:89] "storage-provisioner" [b206ffb9-0004-486d-98ff-d23a63b69555] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:03:19.684063  491185 retry.go:31] will retry after 246.318375ms: missing components: kube-dns
	I1002 08:03:19.945098  491185 system_pods.go:86] 8 kube-system pods found
	I1002 08:03:19.945135  491185 system_pods.go:89] "coredns-66bc5c9577-h88d8" [2f1ec40b-c756-4c21-b68c-293d99715917] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:03:19.945145  491185 system_pods.go:89] "etcd-embed-certs-171347" [926ce91c-0431-4ba1-b44e-fffbf0775a3b] Running
	I1002 08:03:19.945151  491185 system_pods.go:89] "kindnet-q6rpr" [debb56b0-5037-4c8f-83f9-277929580103] Running
	I1002 08:03:19.945156  491185 system_pods.go:89] "kube-apiserver-embed-certs-171347" [e47c2d75-962d-4fcc-b386-ca8894e72519] Running
	I1002 08:03:19.945171  491185 system_pods.go:89] "kube-controller-manager-embed-certs-171347" [d161f53c-5955-4fee-b51b-766596a6970c] Running
	I1002 08:03:19.945179  491185 system_pods.go:89] "kube-proxy-jzmxf" [0bb71089-73b5-4b6c-92cd-0c4ba1aee456] Running
	I1002 08:03:19.945183  491185 system_pods.go:89] "kube-scheduler-embed-certs-171347" [8fbc6745-47c9-43ca-af46-4746f82e41f3] Running
	I1002 08:03:19.945189  491185 system_pods.go:89] "storage-provisioner" [b206ffb9-0004-486d-98ff-d23a63b69555] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:03:19.945210  491185 retry.go:31] will retry after 326.55233ms: missing components: kube-dns
	I1002 08:03:20.277605  491185 system_pods.go:86] 8 kube-system pods found
	I1002 08:03:20.277648  491185 system_pods.go:89] "coredns-66bc5c9577-h88d8" [2f1ec40b-c756-4c21-b68c-293d99715917] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:03:20.277659  491185 system_pods.go:89] "etcd-embed-certs-171347" [926ce91c-0431-4ba1-b44e-fffbf0775a3b] Running
	I1002 08:03:20.277674  491185 system_pods.go:89] "kindnet-q6rpr" [debb56b0-5037-4c8f-83f9-277929580103] Running
	I1002 08:03:20.277680  491185 system_pods.go:89] "kube-apiserver-embed-certs-171347" [e47c2d75-962d-4fcc-b386-ca8894e72519] Running
	I1002 08:03:20.277686  491185 system_pods.go:89] "kube-controller-manager-embed-certs-171347" [d161f53c-5955-4fee-b51b-766596a6970c] Running
	I1002 08:03:20.277690  491185 system_pods.go:89] "kube-proxy-jzmxf" [0bb71089-73b5-4b6c-92cd-0c4ba1aee456] Running
	I1002 08:03:20.277695  491185 system_pods.go:89] "kube-scheduler-embed-certs-171347" [8fbc6745-47c9-43ca-af46-4746f82e41f3] Running
	I1002 08:03:20.277703  491185 system_pods.go:89] "storage-provisioner" [b206ffb9-0004-486d-98ff-d23a63b69555] Running
	I1002 08:03:20.277712  491185 system_pods.go:126] duration metric: took 597.961528ms to wait for k8s-apps to be running ...
	I1002 08:03:20.277729  491185 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 08:03:20.277802  491185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:03:20.296635  491185 system_svc.go:56] duration metric: took 18.897922ms WaitForService to wait for kubelet
	I1002 08:03:20.296673  491185 kubeadm.go:586] duration metric: took 43.625295394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:03:20.296692  491185 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:03:20.302220  491185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:03:20.302257  491185 node_conditions.go:123] node cpu capacity is 2
	I1002 08:03:20.302281  491185 node_conditions.go:105] duration metric: took 5.573204ms to run NodePressure ...
	I1002 08:03:20.302294  491185 start.go:241] waiting for startup goroutines ...
	I1002 08:03:20.302302  491185 start.go:246] waiting for cluster config update ...
	I1002 08:03:20.302316  491185 start.go:255] writing updated cluster config ...
	I1002 08:03:20.302647  491185 ssh_runner.go:195] Run: rm -f paused
	I1002 08:03:20.306142  491185 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:03:20.310309  491185 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h88d8" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:21.316185  491185 pod_ready.go:94] pod "coredns-66bc5c9577-h88d8" is "Ready"
	I1002 08:03:21.316259  491185 pod_ready.go:86] duration metric: took 1.005911779s for pod "coredns-66bc5c9577-h88d8" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:21.321068  491185 pod_ready.go:83] waiting for pod "etcd-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:21.328118  491185 pod_ready.go:94] pod "etcd-embed-certs-171347" is "Ready"
	I1002 08:03:21.328204  491185 pod_ready.go:86] duration metric: took 7.054506ms for pod "etcd-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:21.331639  491185 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:21.338531  491185 pod_ready.go:94] pod "kube-apiserver-embed-certs-171347" is "Ready"
	I1002 08:03:21.338607  491185 pod_ready.go:86] duration metric: took 6.895358ms for pod "kube-apiserver-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:21.341228  491185 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:21.514086  491185 pod_ready.go:94] pod "kube-controller-manager-embed-certs-171347" is "Ready"
	I1002 08:03:21.514183  491185 pod_ready.go:86] duration metric: took 172.874306ms for pod "kube-controller-manager-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:21.715588  491185 pod_ready.go:83] waiting for pod "kube-proxy-jzmxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:22.118996  491185 pod_ready.go:94] pod "kube-proxy-jzmxf" is "Ready"
	I1002 08:03:22.119025  491185 pod_ready.go:86] duration metric: took 403.406808ms for pod "kube-proxy-jzmxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:22.315019  491185 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:22.713780  491185 pod_ready.go:94] pod "kube-scheduler-embed-certs-171347" is "Ready"
	I1002 08:03:22.713808  491185 pod_ready.go:86] duration metric: took 398.759529ms for pod "kube-scheduler-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:22.713821  491185 pod_ready.go:40] duration metric: took 2.407648078s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:03:22.803225  491185 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 08:03:22.806665  491185 out.go:179] * Done! kubectl is now configured to use "embed-certs-171347" cluster and "default" namespace by default
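	With the profile reported as Done, minikube has written the embed-certs-171347 context into the kubeconfig, so the state asserted by the test can be spot-checked by hand (illustrative commands, not part of the test output):
	
		kubectl --context embed-certs-171347 get nodes
		kubectl --context embed-certs-171347 -n kube-system get pods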
	I1002 08:03:22.911271  495337 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.1680598s)
	I1002 08:03:22.911288  495337 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.165600364s)
	I1002 08:03:22.911319  495337 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.038247519s)
	I1002 08:03:22.911335  495337 node_ready.go:35] waiting up to 6m0s for node "no-preload-604182" to be "Ready" ...
	I1002 08:03:22.911495  495337 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.800322974s)
	I1002 08:03:22.914674  495337 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-604182 addons enable metrics-server
	
	I1002 08:03:22.999765  495337 node_ready.go:49] node "no-preload-604182" is "Ready"
	I1002 08:03:22.999792  495337 node_ready.go:38] duration metric: took 88.443129ms for node "no-preload-604182" to be "Ready" ...
	I1002 08:03:22.999806  495337 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:03:22.999881  495337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:03:23.026297  495337 api_server.go:72] duration metric: took 6.6847563s to wait for apiserver process to appear ...
	I1002 08:03:23.026383  495337 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:03:23.026406  495337 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 08:03:23.030628  495337 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1002 08:03:23.033449  495337 addons.go:514] duration metric: took 6.691703383s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1002 08:03:23.046538  495337 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 08:03:23.052039  495337 api_server.go:141] control plane version: v1.34.1
	I1002 08:03:23.052071  495337 api_server.go:131] duration metric: took 25.67871ms to wait for apiserver health ...
	I1002 08:03:23.052081  495337 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:03:23.062545  495337 system_pods.go:59] 8 kube-system pods found
	I1002 08:03:23.062588  495337 system_pods.go:61] "coredns-66bc5c9577-74zfp" [0aa93160-9105-470c-b62f-c8d0949da486] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:03:23.062604  495337 system_pods.go:61] "etcd-no-preload-604182" [3f1d8eef-d3ec-41bd-856b-dd8687a2862e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:03:23.062611  495337 system_pods.go:61] "kindnet-5zjv7" [578c1406-9933-4a37-9826-4f696b5a3e38] Running
	I1002 08:03:23.062620  495337 system_pods.go:61] "kube-apiserver-no-preload-604182" [51e008fc-06a1-447a-b35a-ac2dc7470dad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:03:23.062630  495337 system_pods.go:61] "kube-controller-manager-no-preload-604182" [732af5c5-135d-4cc9-9df1-ae4053eae345] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:03:23.062637  495337 system_pods.go:61] "kube-proxy-qn6pp" [fe309cb8-ddea-4301-a231-bf301f3e25d6] Running
	I1002 08:03:23.062651  495337 system_pods.go:61] "kube-scheduler-no-preload-604182" [659bcac4-deed-4fa0-ae78-72e9e27c83da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:03:23.062656  495337 system_pods.go:61] "storage-provisioner" [323f91d2-af51-47c9-8da6-0768f4dc30ab] Running
	I1002 08:03:23.062667  495337 system_pods.go:74] duration metric: took 10.577074ms to wait for pod list to return data ...
	I1002 08:03:23.062679  495337 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:03:23.077143  495337 default_sa.go:45] found service account: "default"
	I1002 08:03:23.077170  495337 default_sa.go:55] duration metric: took 14.481446ms for default service account to be created ...
	I1002 08:03:23.077180  495337 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 08:03:23.085290  495337 system_pods.go:86] 8 kube-system pods found
	I1002 08:03:23.085321  495337 system_pods.go:89] "coredns-66bc5c9577-74zfp" [0aa93160-9105-470c-b62f-c8d0949da486] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:03:23.085330  495337 system_pods.go:89] "etcd-no-preload-604182" [3f1d8eef-d3ec-41bd-856b-dd8687a2862e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:03:23.085335  495337 system_pods.go:89] "kindnet-5zjv7" [578c1406-9933-4a37-9826-4f696b5a3e38] Running
	I1002 08:03:23.085342  495337 system_pods.go:89] "kube-apiserver-no-preload-604182" [51e008fc-06a1-447a-b35a-ac2dc7470dad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:03:23.085348  495337 system_pods.go:89] "kube-controller-manager-no-preload-604182" [732af5c5-135d-4cc9-9df1-ae4053eae345] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:03:23.085354  495337 system_pods.go:89] "kube-proxy-qn6pp" [fe309cb8-ddea-4301-a231-bf301f3e25d6] Running
	I1002 08:03:23.085360  495337 system_pods.go:89] "kube-scheduler-no-preload-604182" [659bcac4-deed-4fa0-ae78-72e9e27c83da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:03:23.085364  495337 system_pods.go:89] "storage-provisioner" [323f91d2-af51-47c9-8da6-0768f4dc30ab] Running
	I1002 08:03:23.085373  495337 system_pods.go:126] duration metric: took 8.184601ms to wait for k8s-apps to be running ...
	I1002 08:03:23.085381  495337 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 08:03:23.085436  495337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:03:23.127254  495337 system_svc.go:56] duration metric: took 41.862786ms WaitForService to wait for kubelet
	I1002 08:03:23.127284  495337 kubeadm.go:586] duration metric: took 6.785745587s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:03:23.127316  495337 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:03:23.137041  495337 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:03:23.137090  495337 node_conditions.go:123] node cpu capacity is 2
	I1002 08:03:23.137136  495337 node_conditions.go:105] duration metric: took 9.813834ms to run NodePressure ...
	I1002 08:03:23.137149  495337 start.go:241] waiting for startup goroutines ...
	I1002 08:03:23.137160  495337 start.go:246] waiting for cluster config update ...
	I1002 08:03:23.137182  495337 start.go:255] writing updated cluster config ...
	I1002 08:03:23.137549  495337 ssh_runner.go:195] Run: rm -f paused
	I1002 08:03:23.142646  495337 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:03:23.148054  495337 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-74zfp" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 08:03:25.166832  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	W1002 08:03:27.653955  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
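	The repeated pod_ready warnings above are minikube polling the pod's Ready condition until it flips; the same wait can be expressed directly with kubectl (a sketch using the coredns label selector, assuming the no-preload-604182 context):
	
		kubectl --context no-preload-604182 -n kube-system wait pod \
		  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m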
	
	
	==> CRI-O <==
	Oct 02 08:03:19 embed-certs-171347 crio[840]: time="2025-10-02T08:03:19.896326478Z" level=info msg="Created container 694a81c6ca0437778da0ab9218f2f1049dd49a85b06f2bf626c843da2bd25a0c: kube-system/storage-provisioner/storage-provisioner" id=fb0c3d77-6a70-493d-a430-28963feef07e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:03:19 embed-certs-171347 crio[840]: time="2025-10-02T08:03:19.897115425Z" level=info msg="Starting container: 694a81c6ca0437778da0ab9218f2f1049dd49a85b06f2bf626c843da2bd25a0c" id=348ca647-009a-4c50-a180-1da370a5b91b name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:03:19 embed-certs-171347 crio[840]: time="2025-10-02T08:03:19.898901755Z" level=info msg="Started container" PID=1703 containerID=694a81c6ca0437778da0ab9218f2f1049dd49a85b06f2bf626c843da2bd25a0c description=kube-system/storage-provisioner/storage-provisioner id=348ca647-009a-4c50-a180-1da370a5b91b name=/runtime.v1.RuntimeService/StartContainer sandboxID=a20dd53e6fe1562ef4819a8b1aaa2bf6688b8988fbd00cfcbe4ff245513b2ba7
	Oct 02 08:03:23 embed-certs-171347 crio[840]: time="2025-10-02T08:03:23.418916222Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4a29e6ed-a99e-4896-9e54-d68db229f3bf name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:03:23 embed-certs-171347 crio[840]: time="2025-10-02T08:03:23.418994787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:03:23 embed-certs-171347 crio[840]: time="2025-10-02T08:03:23.435731909Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bd8c90ce024d6a50f247261d62897d1f95ff8346cc9b8a4f7c14bd5e8382d400 UID:16034ac6-463d-44ce-8c88-afe3eeeec748 NetNS:/var/run/netns/5fc752b8-5baf-4849-bba9-b89e0be47fb1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049ee20}] Aliases:map[]}"
	Oct 02 08:03:23 embed-certs-171347 crio[840]: time="2025-10-02T08:03:23.436035305Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 02 08:03:23 embed-certs-171347 crio[840]: time="2025-10-02T08:03:23.44643737Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bd8c90ce024d6a50f247261d62897d1f95ff8346cc9b8a4f7c14bd5e8382d400 UID:16034ac6-463d-44ce-8c88-afe3eeeec748 NetNS:/var/run/netns/5fc752b8-5baf-4849-bba9-b89e0be47fb1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049ee20}] Aliases:map[]}"
	Oct 02 08:03:23 embed-certs-171347 crio[840]: time="2025-10-02T08:03:23.446815474Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 02 08:03:23 embed-certs-171347 crio[840]: time="2025-10-02T08:03:23.451904645Z" level=info msg="Ran pod sandbox bd8c90ce024d6a50f247261d62897d1f95ff8346cc9b8a4f7c14bd5e8382d400 with infra container: default/busybox/POD" id=4a29e6ed-a99e-4896-9e54-d68db229f3bf name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:03:23 embed-certs-171347 crio[840]: time="2025-10-02T08:03:23.453382246Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=17fd2b6c-48ad-44f2-b248-8aaa7c4f813c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:03:23 embed-certs-171347 crio[840]: time="2025-10-02T08:03:23.453951572Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=17fd2b6c-48ad-44f2-b248-8aaa7c4f813c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:03:23 embed-certs-171347 crio[840]: time="2025-10-02T08:03:23.454080985Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=17fd2b6c-48ad-44f2-b248-8aaa7c4f813c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:03:23 embed-certs-171347 crio[840]: time="2025-10-02T08:03:23.458256908Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9d342364-ba69-48d0-8ea0-8a5199da0d1a name=/runtime.v1.ImageService/PullImage
	Oct 02 08:03:23 embed-certs-171347 crio[840]: time="2025-10-02T08:03:23.461790799Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 02 08:03:25 embed-certs-171347 crio[840]: time="2025-10-02T08:03:25.472535506Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9d342364-ba69-48d0-8ea0-8a5199da0d1a name=/runtime.v1.ImageService/PullImage
	Oct 02 08:03:25 embed-certs-171347 crio[840]: time="2025-10-02T08:03:25.473707488Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0e5ef1d7-3dcf-4e9c-ba9c-d0c86f94cc46 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:03:25 embed-certs-171347 crio[840]: time="2025-10-02T08:03:25.475481273Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=99dc5912-05d7-4fdf-ab6f-ceb8bb1f9024 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:03:25 embed-certs-171347 crio[840]: time="2025-10-02T08:03:25.483209656Z" level=info msg="Creating container: default/busybox/busybox" id=f25a5967-c32c-4da7-b319-18962be9f0e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:03:25 embed-certs-171347 crio[840]: time="2025-10-02T08:03:25.484064343Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:03:25 embed-certs-171347 crio[840]: time="2025-10-02T08:03:25.490368294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:03:25 embed-certs-171347 crio[840]: time="2025-10-02T08:03:25.491237102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:03:25 embed-certs-171347 crio[840]: time="2025-10-02T08:03:25.507441764Z" level=info msg="Created container a5a2e9ead2a267ad7930cfa532359995862de08485bbdd6a43a88a1b30b45391: default/busybox/busybox" id=f25a5967-c32c-4da7-b319-18962be9f0e9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:03:25 embed-certs-171347 crio[840]: time="2025-10-02T08:03:25.508396267Z" level=info msg="Starting container: a5a2e9ead2a267ad7930cfa532359995862de08485bbdd6a43a88a1b30b45391" id=eebeec37-4b8f-4024-884a-3bea9d795e47 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:03:25 embed-certs-171347 crio[840]: time="2025-10-02T08:03:25.511224609Z" level=info msg="Started container" PID=1762 containerID=a5a2e9ead2a267ad7930cfa532359995862de08485bbdd6a43a88a1b30b45391 description=default/busybox/busybox id=eebeec37-4b8f-4024-884a-3bea9d795e47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd8c90ce024d6a50f247261d62897d1f95ff8346cc9b8a4f7c14bd5e8382d400
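	The CRI-O excerpt above is the crio systemd unit's journal on the embed-certs node; when reproducing a failure locally it can be pulled with (hypothetical invocation, tail length chosen arbitrarily):
	
		minikube -p embed-certs-171347 ssh -- sudo journalctl -u crio --no-pager -n 50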
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	a5a2e9ead2a26       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   bd8c90ce024d6       busybox                                      default
	694a81c6ca043       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   a20dd53e6fe15       storage-provisioner                          kube-system
	3ba9bf7dbf363       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   1d45da7bdacd9       coredns-66bc5c9577-h88d8                     kube-system
	3f4b7d704f993       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   d05a92f9f7ccc       kube-proxy-jzmxf                             kube-system
	5b0ee0228e07d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   4565d70caa387       kindnet-q6rpr                                kube-system
	69c86a82b5333       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   84086ebe42002       kube-apiserver-embed-certs-171347            kube-system
	38fe8452ecf83       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   27e2698b53196       etcd-embed-certs-171347                      kube-system
	22e97ef60d6d3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   1385c44df9f69       kube-scheduler-embed-certs-171347            kube-system
	5919690527e2d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   12177ace786d8       kube-controller-manager-embed-certs-171347   kube-system
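	The container status table corresponds to a CRI listing on the node; roughly the same view can be produced with crictl (assuming shell access via minikube ssh):
	
		sudo crictl ps -a -o table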
	
	
	==> coredns [3ba9bf7dbf36310eea55b3ea13aabcbb062437b36f3b56908c2e49d5fd25c346] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47962 - 53465 "HINFO IN 4795930393825096667.6694866942185763431. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034329095s
	
	
	==> describe nodes <==
	Name:               embed-certs-171347
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-171347
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=embed-certs-171347
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T08_02_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 08:02:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-171347
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:03:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:03:33 +0000   Thu, 02 Oct 2025 08:02:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:03:33 +0000   Thu, 02 Oct 2025 08:02:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:03:33 +0000   Thu, 02 Oct 2025 08:02:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 08:03:33 +0000   Thu, 02 Oct 2025 08:03:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-171347
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e160000b10634354a856c255beea4d6d
	  System UUID:                73993af2-e810-4ff8-b445-81bcd4ff9d18
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-h88d8                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-171347                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-q6rpr                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-171347             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-embed-certs-171347    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-jzmxf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-171347             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Warning  CgroupV1                 74s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  73s (x8 over 74s)  kubelet          Node embed-certs-171347 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 74s)  kubelet          Node embed-certs-171347 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x8 over 74s)  kubelet          Node embed-certs-171347 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node embed-certs-171347 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node embed-certs-171347 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node embed-certs-171347 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-171347 event: Registered Node embed-certs-171347 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-171347 status is now: NodeReady
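	The node dump above is standard kubectl output; the Ready transition the tests wait on can be re-checked with (illustrative, same context as above):
	
		kubectl --context embed-certs-171347 describe node embed-certs-171347
		kubectl --context embed-certs-171347 get node embed-certs-171347 \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'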
	
	
	==> dmesg <==
	[Oct 2 07:32] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:33] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:00] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [38fe8452ecf8302e3c6e66d45e3c35de60a367d4cbca111e7c12c04278d9f853] <==
	{"level":"warn","ts":"2025-10-02T08:02:24.572409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:24.595058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:24.619619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:24.663057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:24.689844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:24.753112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:24.769816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:24.789718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:24.807408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:24.838612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:24.872737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:24.913682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:24.947928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:24.974150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:25.009476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:25.042823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:25.079411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:25.130863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:25.177980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:25.206917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:25.247643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:25.276210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:25.292541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:25.315885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:02:25.497033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37234","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:03:33 up  2:46,  0 user,  load average: 5.44, 3.10, 2.19
	Linux embed-certs-171347 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5b0ee0228e07d3414f9899813d39ad7090b430c3552d176f3c349b31c5ccd827] <==
	I1002 08:02:38.699660       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 08:02:38.700545       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 08:02:38.700695       1 main.go:148] setting mtu 1500 for CNI 
	I1002 08:02:38.700714       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 08:02:38.700730       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T08:02:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 08:02:38.900238       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 08:02:38.900317       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 08:02:38.900395       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 08:02:38.901694       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 08:03:08.901505       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 08:03:08.901647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 08:03:08.901737       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 08:03:08.901829       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 08:03:10.005007       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 08:03:10.005137       1 metrics.go:72] Registering metrics
	I1002 08:03:10.005236       1 controller.go:711] "Syncing nftables rules"
	I1002 08:03:18.907139       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 08:03:18.907255       1 main.go:301] handling current node
	I1002 08:03:28.900456       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 08:03:28.900509       1 main.go:301] handling current node
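	The kindnet log shows the CNI agent timing out against the apiserver service IP and then syncing once connectivity returns; the same container log can be fetched via the API or directly on the node (pod and container IDs taken from the sections above):
	
		kubectl --context embed-certs-171347 -n kube-system logs kindnet-q6rpr --tail=20
		# or, on the node:
		sudo crictl logs --tail 20 5b0ee0228e07d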
	
	
	==> kube-apiserver [69c86a82b533383928aa20516774e4f321f3b4ab3effb1d3ae5fafba0e5c0dcd] <==
	I1002 08:02:27.689394       1 cache.go:39] Caches are synced for autoregister controller
	I1002 08:02:27.716136       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 08:02:27.739607       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:02:27.739725       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1002 08:02:27.786454       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 08:02:27.824097       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:02:27.824164       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 08:02:28.043454       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 08:02:28.067349       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 08:02:28.067376       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:02:30.212181       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:02:30.310924       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:02:30.441108       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 08:02:30.467927       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1002 08:02:30.469952       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 08:02:30.478474       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 08:02:31.180663       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 08:02:31.540274       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 08:02:31.577552       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 08:02:31.593506       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 08:02:36.236869       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:02:36.248711       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:02:37.172338       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1002 08:02:37.266371       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1002 08:03:31.293769       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:45034: use of closed network connection
	
	
	==> kube-controller-manager [5919690527e2d30d67c64c5df89560f1e86321a6e57a3ca394ebd3d3b198c21e] <==
	I1002 08:02:36.185085       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 08:02:36.191147       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 08:02:36.195282       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 08:02:36.208952       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 08:02:36.214425       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 08:02:36.214829       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 08:02:36.214962       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:02:36.215033       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 08:02:36.217247       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 08:02:36.217368       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 08:02:36.217551       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 08:02:36.217797       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 08:02:36.217819       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 08:02:36.218382       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 08:02:36.218857       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 08:02:36.222907       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 08:02:36.223847       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 08:02:36.226696       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-171347" podCIDRs=["10.244.0.0/24"]
	I1002 08:02:36.232665       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 08:02:36.233018       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 08:02:36.235218       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 08:02:36.267454       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:02:36.267545       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 08:02:36.267574       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 08:03:21.192782       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3f4b7d704f99356c1a8a35e557c2410b9895623ca78b574c732d7cb41bfa3cad] <==
	I1002 08:02:39.146318       1 server_linux.go:53] "Using iptables proxy"
	I1002 08:02:39.220663       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 08:02:39.321190       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 08:02:39.321248       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 08:02:39.321337       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 08:02:39.340738       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 08:02:39.340786       1 server_linux.go:132] "Using iptables Proxier"
	I1002 08:02:39.344753       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 08:02:39.345050       1 server.go:527] "Version info" version="v1.34.1"
	I1002 08:02:39.345073       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:02:39.351215       1 config.go:200] "Starting service config controller"
	I1002 08:02:39.351234       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 08:02:39.351703       1 config.go:106] "Starting endpoint slice config controller"
	I1002 08:02:39.351713       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 08:02:39.351748       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 08:02:39.351755       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 08:02:39.357160       1 config.go:309] "Starting node config controller"
	I1002 08:02:39.357183       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 08:02:39.357193       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 08:02:39.451842       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 08:02:39.451846       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 08:02:39.451885       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [22e97ef60d6d3d04fbbfd69af98167be898a9d7e8bc2ebf69670d82fd3953f98] <==
	I1002 08:02:25.479721       1 serving.go:386] Generated self-signed cert in-memory
	I1002 08:02:30.503370       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 08:02:30.504939       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:02:30.521858       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 08:02:30.522070       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:02:30.529336       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:02:30.522082       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:02:30.535646       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:02:30.522095       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 08:02:30.522038       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 08:02:30.538115       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 08:02:30.630776       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:02:30.636866       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:02:30.638771       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 02 08:02:37 embed-certs-171347 kubelet[1284]: I1002 08:02:37.351703    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0bb71089-73b5-4b6c-92cd-0c4ba1aee456-kube-proxy\") pod \"kube-proxy-jzmxf\" (UID: \"0bb71089-73b5-4b6c-92cd-0c4ba1aee456\") " pod="kube-system/kube-proxy-jzmxf"
	Oct 02 08:02:37 embed-certs-171347 kubelet[1284]: I1002 08:02:37.351750    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bb71089-73b5-4b6c-92cd-0c4ba1aee456-lib-modules\") pod \"kube-proxy-jzmxf\" (UID: \"0bb71089-73b5-4b6c-92cd-0c4ba1aee456\") " pod="kube-system/kube-proxy-jzmxf"
	Oct 02 08:02:37 embed-certs-171347 kubelet[1284]: I1002 08:02:37.351769    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/debb56b0-5037-4c8f-83f9-277929580103-lib-modules\") pod \"kindnet-q6rpr\" (UID: \"debb56b0-5037-4c8f-83f9-277929580103\") " pod="kube-system/kindnet-q6rpr"
	Oct 02 08:02:37 embed-certs-171347 kubelet[1284]: I1002 08:02:37.351788    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26d66\" (UniqueName: \"kubernetes.io/projected/debb56b0-5037-4c8f-83f9-277929580103-kube-api-access-26d66\") pod \"kindnet-q6rpr\" (UID: \"debb56b0-5037-4c8f-83f9-277929580103\") " pod="kube-system/kindnet-q6rpr"
	Oct 02 08:02:37 embed-certs-171347 kubelet[1284]: I1002 08:02:37.351817    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bb71089-73b5-4b6c-92cd-0c4ba1aee456-xtables-lock\") pod \"kube-proxy-jzmxf\" (UID: \"0bb71089-73b5-4b6c-92cd-0c4ba1aee456\") " pod="kube-system/kube-proxy-jzmxf"
	Oct 02 08:02:37 embed-certs-171347 kubelet[1284]: I1002 08:02:37.351833    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhxkr\" (UniqueName: \"kubernetes.io/projected/0bb71089-73b5-4b6c-92cd-0c4ba1aee456-kube-api-access-fhxkr\") pod \"kube-proxy-jzmxf\" (UID: \"0bb71089-73b5-4b6c-92cd-0c4ba1aee456\") " pod="kube-system/kube-proxy-jzmxf"
	Oct 02 08:02:37 embed-certs-171347 kubelet[1284]: I1002 08:02:37.351849    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/debb56b0-5037-4c8f-83f9-277929580103-cni-cfg\") pod \"kindnet-q6rpr\" (UID: \"debb56b0-5037-4c8f-83f9-277929580103\") " pod="kube-system/kindnet-q6rpr"
	Oct 02 08:02:37 embed-certs-171347 kubelet[1284]: I1002 08:02:37.351865    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/debb56b0-5037-4c8f-83f9-277929580103-xtables-lock\") pod \"kindnet-q6rpr\" (UID: \"debb56b0-5037-4c8f-83f9-277929580103\") " pod="kube-system/kindnet-q6rpr"
	Oct 02 08:02:38 embed-certs-171347 kubelet[1284]: I1002 08:02:38.301981    1284 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 08:02:38 embed-certs-171347 kubelet[1284]: E1002 08:02:38.458449    1284 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 02 08:02:38 embed-certs-171347 kubelet[1284]: E1002 08:02:38.462635    1284 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0bb71089-73b5-4b6c-92cd-0c4ba1aee456-kube-proxy podName:0bb71089-73b5-4b6c-92cd-0c4ba1aee456 nodeName:}" failed. No retries permitted until 2025-10-02 08:02:38.960749284 +0000 UTC m=+7.487118622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/0bb71089-73b5-4b6c-92cd-0c4ba1aee456-kube-proxy") pod "kube-proxy-jzmxf" (UID: "0bb71089-73b5-4b6c-92cd-0c4ba1aee456") : failed to sync configmap cache: timed out waiting for the condition
	Oct 02 08:02:38 embed-certs-171347 kubelet[1284]: I1002 08:02:38.855558    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-q6rpr" podStartSLOduration=1.85553959 podStartE2EDuration="1.85553959s" podCreationTimestamp="2025-10-02 08:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:02:38.855294886 +0000 UTC m=+7.381664225" watchObservedRunningTime="2025-10-02 08:02:38.85553959 +0000 UTC m=+7.381908929"
	Oct 02 08:02:39 embed-certs-171347 kubelet[1284]: W1002 08:02:39.051184    1284 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/crio-d05a92f9f7ccc06683054f6f4f1615276430b2550d810bfce0056675b23d28ac WatchSource:0}: Error finding container d05a92f9f7ccc06683054f6f4f1615276430b2550d810bfce0056675b23d28ac: Status 404 returned error can't find the container with id d05a92f9f7ccc06683054f6f4f1615276430b2550d810bfce0056675b23d28ac
	Oct 02 08:02:41 embed-certs-171347 kubelet[1284]: I1002 08:02:41.098658    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jzmxf" podStartSLOduration=4.098637289 podStartE2EDuration="4.098637289s" podCreationTimestamp="2025-10-02 08:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:02:39.858738021 +0000 UTC m=+8.385107368" watchObservedRunningTime="2025-10-02 08:02:41.098637289 +0000 UTC m=+9.625006644"
	Oct 02 08:03:19 embed-certs-171347 kubelet[1284]: I1002 08:03:19.369786    1284 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 02 08:03:19 embed-certs-171347 kubelet[1284]: I1002 08:03:19.447313    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b206ffb9-0004-486d-98ff-d23a63b69555-tmp\") pod \"storage-provisioner\" (UID: \"b206ffb9-0004-486d-98ff-d23a63b69555\") " pod="kube-system/storage-provisioner"
	Oct 02 08:03:19 embed-certs-171347 kubelet[1284]: I1002 08:03:19.447558    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsrcn\" (UniqueName: \"kubernetes.io/projected/2f1ec40b-c756-4c21-b68c-293d99715917-kube-api-access-nsrcn\") pod \"coredns-66bc5c9577-h88d8\" (UID: \"2f1ec40b-c756-4c21-b68c-293d99715917\") " pod="kube-system/coredns-66bc5c9577-h88d8"
	Oct 02 08:03:19 embed-certs-171347 kubelet[1284]: I1002 08:03:19.447672    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwqwq\" (UniqueName: \"kubernetes.io/projected/b206ffb9-0004-486d-98ff-d23a63b69555-kube-api-access-qwqwq\") pod \"storage-provisioner\" (UID: \"b206ffb9-0004-486d-98ff-d23a63b69555\") " pod="kube-system/storage-provisioner"
	Oct 02 08:03:19 embed-certs-171347 kubelet[1284]: I1002 08:03:19.447791    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f1ec40b-c756-4c21-b68c-293d99715917-config-volume\") pod \"coredns-66bc5c9577-h88d8\" (UID: \"2f1ec40b-c756-4c21-b68c-293d99715917\") " pod="kube-system/coredns-66bc5c9577-h88d8"
	Oct 02 08:03:19 embed-certs-171347 kubelet[1284]: W1002 08:03:19.752315    1284 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/crio-1d45da7bdacd9c41375c5c95a86c1744e233eb01987da9272805f2d71d6f40c9 WatchSource:0}: Error finding container 1d45da7bdacd9c41375c5c95a86c1744e233eb01987da9272805f2d71d6f40c9: Status 404 returned error can't find the container with id 1d45da7bdacd9c41375c5c95a86c1744e233eb01987da9272805f2d71d6f40c9
	Oct 02 08:03:19 embed-certs-171347 kubelet[1284]: W1002 08:03:19.769372    1284 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/crio-a20dd53e6fe1562ef4819a8b1aaa2bf6688b8988fbd00cfcbe4ff245513b2ba7 WatchSource:0}: Error finding container a20dd53e6fe1562ef4819a8b1aaa2bf6688b8988fbd00cfcbe4ff245513b2ba7: Status 404 returned error can't find the container with id a20dd53e6fe1562ef4819a8b1aaa2bf6688b8988fbd00cfcbe4ff245513b2ba7
	Oct 02 08:03:20 embed-certs-171347 kubelet[1284]: I1002 08:03:20.013198    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.013177565 podStartE2EDuration="43.013177565s" podCreationTimestamp="2025-10-02 08:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:03:20.00744449 +0000 UTC m=+48.533813845" watchObservedRunningTime="2025-10-02 08:03:20.013177565 +0000 UTC m=+48.539546912"
	Oct 02 08:03:20 embed-certs-171347 kubelet[1284]: I1002 08:03:20.984328    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-h88d8" podStartSLOduration=43.984210532 podStartE2EDuration="43.984210532s" podCreationTimestamp="2025-10-02 08:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:03:20.032265117 +0000 UTC m=+48.558634472" watchObservedRunningTime="2025-10-02 08:03:20.984210532 +0000 UTC m=+49.510579879"
	Oct 02 08:03:23 embed-certs-171347 kubelet[1284]: I1002 08:03:23.191015    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8nl6\" (UniqueName: \"kubernetes.io/projected/16034ac6-463d-44ce-8c88-afe3eeeec748-kube-api-access-v8nl6\") pod \"busybox\" (UID: \"16034ac6-463d-44ce-8c88-afe3eeeec748\") " pod="default/busybox"
	Oct 02 08:03:23 embed-certs-171347 kubelet[1284]: W1002 08:03:23.449125    1284 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/crio-bd8c90ce024d6a50f247261d62897d1f95ff8346cc9b8a4f7c14bd5e8382d400 WatchSource:0}: Error finding container bd8c90ce024d6a50f247261d62897d1f95ff8346cc9b8a4f7c14bd5e8382d400: Status 404 returned error can't find the container with id bd8c90ce024d6a50f247261d62897d1f95ff8346cc9b8a4f7c14bd5e8382d400
	
	
	==> storage-provisioner [694a81c6ca0437778da0ab9218f2f1049dd49a85b06f2bf626c843da2bd25a0c] <==
	I1002 08:03:19.998387       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 08:03:20.045643       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 08:03:20.045720       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 08:03:20.048392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:20.056826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:03:20.056978       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 08:03:20.059610       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5810a4b1-cc04-4b0a-996b-984738abc721", APIVersion:"v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-171347_ebd1bfeb-b1dd-4de8-b7cb-b2768bbaddff became leader
	I1002 08:03:20.059657       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-171347_ebd1bfeb-b1dd-4de8-b7cb-b2768bbaddff!
	W1002 08:03:20.060153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:20.076552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:03:20.159984       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-171347_ebd1bfeb-b1dd-4de8-b7cb-b2768bbaddff!
	W1002 08:03:22.080089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:22.086822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:24.094711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:24.106700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:26.111595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:26.118248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:28.121962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:28.130369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:30.133492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:30.139461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:32.148647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:32.159803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-171347 -n embed-certs-171347
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-171347 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.36s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-604182 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-604182 --alsologtostderr -v=1: exit status 80 (2.318957705s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-604182 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 08:04:12.548255  500419 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:04:12.548464  500419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:04:12.548488  500419 out.go:374] Setting ErrFile to fd 2...
	I1002 08:04:12.548507  500419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:04:12.548801  500419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:04:12.550097  500419 out.go:368] Setting JSON to false
	I1002 08:04:12.550151  500419 mustload.go:65] Loading cluster: no-preload-604182
	I1002 08:04:12.550616  500419 config.go:182] Loaded profile config "no-preload-604182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:04:12.551109  500419 cli_runner.go:164] Run: docker container inspect no-preload-604182 --format={{.State.Status}}
	I1002 08:04:12.586526  500419 host.go:66] Checking if "no-preload-604182" exists ...
	I1002 08:04:12.586858  500419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:04:12.678038  500419 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-02 08:04:12.668687739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:04:12.678682  500419 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-604182 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 08:04:12.682765  500419 out.go:179] * Pausing node no-preload-604182 ... 
	I1002 08:04:12.686568  500419 host.go:66] Checking if "no-preload-604182" exists ...
	I1002 08:04:12.686908  500419 ssh_runner.go:195] Run: systemctl --version
	I1002 08:04:12.686959  500419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-604182
	I1002 08:04:12.709854  500419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/no-preload-604182/id_rsa Username:docker}
	I1002 08:04:12.814304  500419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:04:12.844482  500419 pause.go:51] kubelet running: true
	I1002 08:04:12.844548  500419 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:04:13.242605  500419 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:04:13.242702  500419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:04:13.352906  500419 cri.go:89] found id: "cf91d8320e31d1cdb4432930b9af5cdeb2b936e99b90b8a85fab2f65fd803d34"
	I1002 08:04:13.352977  500419 cri.go:89] found id: "c8704e5c1cedb8c825267c9042fc932867f91cfb0f2a0998dac40e8955311969"
	I1002 08:04:13.353018  500419 cri.go:89] found id: "941fe1e375ab8b5c7819755f3d2feb5bcdaf2abeb7390f95036f174f13325d9f"
	I1002 08:04:13.353039  500419 cri.go:89] found id: "149d1489fe735da77b26aff3ec794c7c79ef2de589160921415fb965adcead0f"
	I1002 08:04:13.353059  500419 cri.go:89] found id: "345b977a1a5ae88c18319ef442b740dfcd5b6f2cff29fd84e21439458e7a131c"
	I1002 08:04:13.353098  500419 cri.go:89] found id: "4164431db5f8614c900dab52a55fbc230192e5350939fe5d0d56bfc4b9f37616"
	I1002 08:04:13.353118  500419 cri.go:89] found id: "3e1fc7a1946e3a39d39fe7e56e659a01f9a77a1b064829ae68f8e7533e1798bc"
	I1002 08:04:13.353140  500419 cri.go:89] found id: "77029f6aa5b6233463612c47bb436aebdb6578cbd16ee091398e61c2c07d4608"
	I1002 08:04:13.353174  500419 cri.go:89] found id: "3094807a90d6dcd41655425e2f8000995d5181c4b8e85810c853b4db03b96dc4"
	I1002 08:04:13.353202  500419 cri.go:89] found id: "3810207ffcd2ec126a0d091f5c46901cf5991af720346d8e2ae59ddae078ecea"
	I1002 08:04:13.353221  500419 cri.go:89] found id: "7085a5c11d9068aa1530ac0f9c639bae1b8214bbc8ae69419c1885816bfc2422"
	I1002 08:04:13.353252  500419 cri.go:89] found id: ""
	I1002 08:04:13.353327  500419 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:04:13.367405  500419 retry.go:31] will retry after 348.492155ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:04:13Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:04:13.716755  500419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:04:13.734038  500419 pause.go:51] kubelet running: false
	I1002 08:04:13.734181  500419 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:04:13.980109  500419 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:04:13.980252  500419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:04:14.104349  500419 cri.go:89] found id: "cf91d8320e31d1cdb4432930b9af5cdeb2b936e99b90b8a85fab2f65fd803d34"
	I1002 08:04:14.104426  500419 cri.go:89] found id: "c8704e5c1cedb8c825267c9042fc932867f91cfb0f2a0998dac40e8955311969"
	I1002 08:04:14.104447  500419 cri.go:89] found id: "941fe1e375ab8b5c7819755f3d2feb5bcdaf2abeb7390f95036f174f13325d9f"
	I1002 08:04:14.104471  500419 cri.go:89] found id: "149d1489fe735da77b26aff3ec794c7c79ef2de589160921415fb965adcead0f"
	I1002 08:04:14.104503  500419 cri.go:89] found id: "345b977a1a5ae88c18319ef442b740dfcd5b6f2cff29fd84e21439458e7a131c"
	I1002 08:04:14.104523  500419 cri.go:89] found id: "4164431db5f8614c900dab52a55fbc230192e5350939fe5d0d56bfc4b9f37616"
	I1002 08:04:14.104545  500419 cri.go:89] found id: "3e1fc7a1946e3a39d39fe7e56e659a01f9a77a1b064829ae68f8e7533e1798bc"
	I1002 08:04:14.104579  500419 cri.go:89] found id: "77029f6aa5b6233463612c47bb436aebdb6578cbd16ee091398e61c2c07d4608"
	I1002 08:04:14.104604  500419 cri.go:89] found id: "3094807a90d6dcd41655425e2f8000995d5181c4b8e85810c853b4db03b96dc4"
	I1002 08:04:14.104636  500419 cri.go:89] found id: "3810207ffcd2ec126a0d091f5c46901cf5991af720346d8e2ae59ddae078ecea"
	I1002 08:04:14.104665  500419 cri.go:89] found id: "7085a5c11d9068aa1530ac0f9c639bae1b8214bbc8ae69419c1885816bfc2422"
	I1002 08:04:14.104685  500419 cri.go:89] found id: ""
	I1002 08:04:14.104765  500419 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:04:14.118832  500419 retry.go:31] will retry after 245.592455ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:04:14Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:04:14.365094  500419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:04:14.391022  500419 pause.go:51] kubelet running: false
	I1002 08:04:14.391177  500419 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:04:14.665114  500419 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:04:14.665267  500419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:04:14.748103  500419 cri.go:89] found id: "cf91d8320e31d1cdb4432930b9af5cdeb2b936e99b90b8a85fab2f65fd803d34"
	I1002 08:04:14.748174  500419 cri.go:89] found id: "c8704e5c1cedb8c825267c9042fc932867f91cfb0f2a0998dac40e8955311969"
	I1002 08:04:14.748194  500419 cri.go:89] found id: "941fe1e375ab8b5c7819755f3d2feb5bcdaf2abeb7390f95036f174f13325d9f"
	I1002 08:04:14.748214  500419 cri.go:89] found id: "149d1489fe735da77b26aff3ec794c7c79ef2de589160921415fb965adcead0f"
	I1002 08:04:14.748238  500419 cri.go:89] found id: "345b977a1a5ae88c18319ef442b740dfcd5b6f2cff29fd84e21439458e7a131c"
	I1002 08:04:14.748272  500419 cri.go:89] found id: "4164431db5f8614c900dab52a55fbc230192e5350939fe5d0d56bfc4b9f37616"
	I1002 08:04:14.748290  500419 cri.go:89] found id: "3e1fc7a1946e3a39d39fe7e56e659a01f9a77a1b064829ae68f8e7533e1798bc"
	I1002 08:04:14.748308  500419 cri.go:89] found id: "77029f6aa5b6233463612c47bb436aebdb6578cbd16ee091398e61c2c07d4608"
	I1002 08:04:14.748339  500419 cri.go:89] found id: "3094807a90d6dcd41655425e2f8000995d5181c4b8e85810c853b4db03b96dc4"
	I1002 08:04:14.748362  500419 cri.go:89] found id: "3810207ffcd2ec126a0d091f5c46901cf5991af720346d8e2ae59ddae078ecea"
	I1002 08:04:14.748385  500419 cri.go:89] found id: "7085a5c11d9068aa1530ac0f9c639bae1b8214bbc8ae69419c1885816bfc2422"
	I1002 08:04:14.748417  500419 cri.go:89] found id: ""
	I1002 08:04:14.748495  500419 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:04:14.767440  500419 out.go:203] 
	W1002 08:04:14.771059  500419 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:04:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:04:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 08:04:14.771117  500419 out.go:285] * 
	* 
	W1002 08:04:14.779172  500419 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 08:04:14.783948  500419 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-604182 --alsologtostderr -v=1 failed: exit status 80
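Note on the failure mode above: the pause path SSHes into the node, confirms kube-system containers exist via crictl, then tries to enumerate running containers with `sudo runc list -f json`, and that call fails because /run/runc does not exist, so minikube exits with GUEST_PAUSE. A minimal reproduction sketch using the same commands recorded in the ssh_runner lines of the stderr above (the `minikube ssh` entry point and the crun-root check at the end are assumptions for illustration, not something this report confirms):

	# open a shell on the node for this profile
	minikube ssh -p no-preload-604182
	# probes the pause path runs, copied from the log above
	sudo systemctl is-active --quiet service kubelet && echo "kubelet active"
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# this is the call that fails with "open /run/runc: no such file or directory"
	sudo runc list -f json
	# hypothesis only: if cri-o is using crun rather than runc, its state lives
	# under /run/crun, which would explain why /run/runc is never created
	ls /run/crun /run/runc

One plausible reading, not confirmed by these logs, is a mismatch between the OCI runtime cri-o actually invokes and the runc state directory the pause code inspects.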
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-604182
helpers_test.go:243: (dbg) docker inspect no-preload-604182:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd",
	        "Created": "2025-10-02T08:01:27.464953821Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 495462,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T08:03:08.837901343Z",
	            "FinishedAt": "2025-10-02T08:03:08.014060494Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/hosts",
	        "LogPath": "/var/lib/docker/containers/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd-json.log",
	        "Name": "/no-preload-604182",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-604182:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-604182",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd",
	                "LowerDir": "/var/lib/docker/overlay2/16b601c8b3476133a497e1d1758975b5ed20ca2deca3a8c241f50363fd47c895-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16b601c8b3476133a497e1d1758975b5ed20ca2deca3a8c241f50363fd47c895/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16b601c8b3476133a497e1d1758975b5ed20ca2deca3a8c241f50363fd47c895/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16b601c8b3476133a497e1d1758975b5ed20ca2deca3a8c241f50363fd47c895/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-604182",
	                "Source": "/var/lib/docker/volumes/no-preload-604182/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-604182",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-604182",
	                "name.minikube.sigs.k8s.io": "no-preload-604182",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4748f59ce4b03d04f32b8bbf44aa9636009784c2190ef9b48c166c098d23ff4b",
	            "SandboxKey": "/var/run/docker/netns/4748f59ce4b0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-604182": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:5d:bd:80:5a:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b49b2bd463034ec68025fea3957066414ae3acd9986e1db0b657dcf84796d697",
	                    "EndpointID": "7aa11ca2f66453e398e22eb741b563208c0a05a7d810b21364396a012be4e426",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-604182",
	                        "eb7634b68495"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-604182 -n no-preload-604182
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-604182 -n no-preload-604182: exit status 2 (443.497481ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-604182 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-604182 logs -n 25: (1.595171897s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-654417 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-654417    │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ delete  │ -p cert-options-654417                                                                                                                                                                                                                        │ cert-options-654417    │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:59 UTC │
	│ start   │ -p cert-expiration-759246 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-759246 │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │ 02 Oct 25 08:01 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-356986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │                     │
	│ stop    │ -p old-k8s-version-356986 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:00 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-356986 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:00 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:01 UTC │
	│ image   │ old-k8s-version-356986 image list --format=json                                                                                                                                                                                               │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ pause   │ -p old-k8s-version-356986 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │                     │
	│ delete  │ -p old-k8s-version-356986                                                                                                                                                                                                                     │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ delete  │ -p old-k8s-version-356986                                                                                                                                                                                                                     │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182      │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:02 UTC │
	│ delete  │ -p cert-expiration-759246                                                                                                                                                                                                                     │ cert-expiration-759246 │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347     │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-604182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-604182      │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │                     │
	│ stop    │ -p no-preload-604182 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-604182      │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p no-preload-604182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-604182      │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182      │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-171347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-171347     │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │                     │
	│ stop    │ -p embed-certs-171347 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-171347     │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-171347 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-171347     │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347     │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │                     │
	│ image   │ no-preload-604182 image list --format=json                                                                                                                                                                                                    │ no-preload-604182      │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p no-preload-604182 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-604182      │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:03:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:03:47.207337  498230 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:03:47.207477  498230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:03:47.207490  498230 out.go:374] Setting ErrFile to fd 2...
	I1002 08:03:47.207495  498230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:03:47.207782  498230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:03:47.208239  498230 out.go:368] Setting JSON to false
	I1002 08:03:47.209341  498230 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9979,"bootTime":1759382249,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 08:03:47.209420  498230 start.go:140] virtualization:  
	I1002 08:03:47.212701  498230 out.go:179] * [embed-certs-171347] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:03:47.219232  498230 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:03:47.219269  498230 notify.go:220] Checking for updates...
	I1002 08:03:47.225166  498230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:03:47.228208  498230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:03:47.231242  498230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 08:03:47.234200  498230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:03:47.237078  498230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:03:47.240540  498230 config.go:182] Loaded profile config "embed-certs-171347": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:03:47.241242  498230 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:03:47.272017  498230 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:03:47.272137  498230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:03:47.347915  498230 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:03:47.337966033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:03:47.348032  498230 docker.go:318] overlay module found
	I1002 08:03:47.351239  498230 out.go:179] * Using the docker driver based on existing profile
	I1002 08:03:47.354068  498230 start.go:304] selected driver: docker
	I1002 08:03:47.354090  498230 start.go:924] validating driver "docker" against &{Name:embed-certs-171347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:03:47.354196  498230 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:03:47.354931  498230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:03:47.405699  498230 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:03:47.395896446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:03:47.406045  498230 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:03:47.406083  498230 cni.go:84] Creating CNI manager for ""
	I1002 08:03:47.406151  498230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:03:47.406201  498230 start.go:348] cluster config:
	{Name:embed-certs-171347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:03:47.409439  498230 out.go:179] * Starting "embed-certs-171347" primary control-plane node in "embed-certs-171347" cluster
	I1002 08:03:47.412198  498230 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 08:03:47.415193  498230 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 08:03:47.418082  498230 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:03:47.418153  498230 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 08:03:47.418159  498230 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 08:03:47.418167  498230 cache.go:58] Caching tarball of preloaded images
	I1002 08:03:47.418360  498230 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 08:03:47.418371  498230 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 08:03:47.418487  498230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/config.json ...
	I1002 08:03:47.437944  498230 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 08:03:47.437970  498230 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 08:03:47.437988  498230 cache.go:232] Successfully downloaded all kic artifacts
	I1002 08:03:47.438011  498230 start.go:360] acquireMachinesLock for embed-certs-171347: {Name:mk251fc9b359c61a60beaff4e6d636acffa89ca4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:03:47.438086  498230 start.go:364] duration metric: took 37.638µs to acquireMachinesLock for "embed-certs-171347"
	I1002 08:03:47.438114  498230 start.go:96] Skipping create...Using existing machine configuration
	I1002 08:03:47.438128  498230 fix.go:54] fixHost starting: 
	I1002 08:03:47.438414  498230 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:03:47.455405  498230 fix.go:112] recreateIfNeeded on embed-certs-171347: state=Stopped err=<nil>
	W1002 08:03:47.455436  498230 fix.go:138] unexpected machine state, will restart: <nil>
	W1002 08:03:43.653584  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	W1002 08:03:45.654397  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	W1002 08:03:47.655009  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	I1002 08:03:47.458713  498230 out.go:252] * Restarting existing docker container for "embed-certs-171347" ...
	I1002 08:03:47.458799  498230 cli_runner.go:164] Run: docker start embed-certs-171347
	I1002 08:03:47.730694  498230 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:03:47.753582  498230 kic.go:430] container "embed-certs-171347" state is running.
	I1002 08:03:47.754232  498230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-171347
	I1002 08:03:47.777348  498230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/config.json ...
	I1002 08:03:47.778175  498230 machine.go:93] provisionDockerMachine start ...
	I1002 08:03:47.778336  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:47.803620  498230 main.go:141] libmachine: Using SSH client type: native
	I1002 08:03:47.803958  498230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1002 08:03:47.803975  498230 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 08:03:47.804864  498230 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 08:03:50.942726  498230 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-171347
	
	I1002 08:03:50.942750  498230 ubuntu.go:182] provisioning hostname "embed-certs-171347"
	I1002 08:03:50.942815  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:50.960573  498230 main.go:141] libmachine: Using SSH client type: native
	I1002 08:03:50.960888  498230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1002 08:03:50.960911  498230 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-171347 && echo "embed-certs-171347" | sudo tee /etc/hostname
	I1002 08:03:51.110053  498230 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-171347
	
	I1002 08:03:51.110163  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:51.128495  498230 main.go:141] libmachine: Using SSH client type: native
	I1002 08:03:51.128911  498230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1002 08:03:51.128936  498230 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-171347' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-171347/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-171347' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 08:03:51.271558  498230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 08:03:51.271590  498230 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 08:03:51.271610  498230 ubuntu.go:190] setting up certificates
	I1002 08:03:51.271620  498230 provision.go:84] configureAuth start
	I1002 08:03:51.271679  498230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-171347
	I1002 08:03:51.288858  498230 provision.go:143] copyHostCerts
	I1002 08:03:51.288930  498230 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 08:03:51.288952  498230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 08:03:51.289029  498230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 08:03:51.289155  498230 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 08:03:51.289167  498230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 08:03:51.289197  498230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 08:03:51.289263  498230 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 08:03:51.289273  498230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 08:03:51.289298  498230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 08:03:51.289359  498230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.embed-certs-171347 san=[127.0.0.1 192.168.85.2 embed-certs-171347 localhost minikube]
	I1002 08:03:51.625001  498230 provision.go:177] copyRemoteCerts
	I1002 08:03:51.625104  498230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 08:03:51.625162  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:51.645259  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:03:51.747471  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1002 08:03:51.766365  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 08:03:51.786274  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 08:03:51.806005  498230 provision.go:87] duration metric: took 534.350467ms to configureAuth
	I1002 08:03:51.806086  498230 ubuntu.go:206] setting minikube options for container-runtime
	I1002 08:03:51.806346  498230 config.go:182] Loaded profile config "embed-certs-171347": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:03:51.806547  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:51.825037  498230 main.go:141] libmachine: Using SSH client type: native
	I1002 08:03:51.825352  498230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1002 08:03:51.825366  498230 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 08:03:52.144044  498230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 08:03:52.144144  498230 machine.go:96] duration metric: took 4.365941285s to provisionDockerMachine
	I1002 08:03:52.144172  498230 start.go:293] postStartSetup for "embed-certs-171347" (driver="docker")
	I1002 08:03:52.144226  498230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 08:03:52.144326  498230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 08:03:52.144400  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:52.166894  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	W1002 08:03:50.153345  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	W1002 08:03:52.158161  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	I1002 08:03:52.267790  498230 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 08:03:52.271691  498230 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 08:03:52.271719  498230 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 08:03:52.271729  498230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 08:03:52.271788  498230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 08:03:52.271865  498230 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 08:03:52.271969  498230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 08:03:52.282671  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:03:52.302467  498230 start.go:296] duration metric: took 158.246611ms for postStartSetup
	I1002 08:03:52.302579  498230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 08:03:52.302643  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:52.320046  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:03:52.413345  498230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 08:03:52.418398  498230 fix.go:56] duration metric: took 4.980271082s for fixHost
	I1002 08:03:52.418422  498230 start.go:83] releasing machines lock for "embed-certs-171347", held for 4.980327674s
	I1002 08:03:52.418499  498230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-171347
	I1002 08:03:52.436540  498230 ssh_runner.go:195] Run: cat /version.json
	I1002 08:03:52.436592  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:52.436614  498230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 08:03:52.436677  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:52.455003  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:03:52.456630  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:03:52.644749  498230 ssh_runner.go:195] Run: systemctl --version
	I1002 08:03:52.653090  498230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 08:03:52.690194  498230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 08:03:52.695547  498230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 08:03:52.695623  498230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 08:03:52.703635  498230 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 08:03:52.703661  498230 start.go:495] detecting cgroup driver to use...
	I1002 08:03:52.703724  498230 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 08:03:52.703799  498230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 08:03:52.718972  498230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 08:03:52.732418  498230 docker.go:218] disabling cri-docker service (if available) ...
	I1002 08:03:52.732538  498230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 08:03:52.748515  498230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 08:03:52.762615  498230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 08:03:52.930574  498230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 08:03:53.062910  498230 docker.go:234] disabling docker service ...
	I1002 08:03:53.063005  498230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 08:03:53.079302  498230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 08:03:53.094329  498230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 08:03:53.232528  498230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 08:03:53.350952  498230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 08:03:53.369946  498230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 08:03:53.386419  498230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 08:03:53.386553  498230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:53.396313  498230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 08:03:53.396436  498230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:53.406298  498230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:53.416137  498230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:53.433425  498230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 08:03:53.447908  498230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:53.457888  498230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:53.467856  498230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:53.477269  498230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 08:03:53.485467  498230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 08:03:53.495744  498230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:03:53.616859  498230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 08:03:53.765616  498230 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 08:03:53.765730  498230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 08:03:53.769772  498230 start.go:563] Will wait 60s for crictl version
	I1002 08:03:53.769847  498230 ssh_runner.go:195] Run: which crictl
	I1002 08:03:53.773631  498230 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 08:03:53.802821  498230 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 08:03:53.802921  498230 ssh_runner.go:195] Run: crio --version
	I1002 08:03:53.845591  498230 ssh_runner.go:195] Run: crio --version
	I1002 08:03:53.886539  498230 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 08:03:53.889395  498230 cli_runner.go:164] Run: docker network inspect embed-certs-171347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:03:53.905164  498230 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 08:03:53.909166  498230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:03:53.919272  498230 kubeadm.go:883] updating cluster {Name:embed-certs-171347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 08:03:53.919385  498230 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:03:53.919446  498230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:03:53.961622  498230 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:03:53.961647  498230 crio.go:433] Images already preloaded, skipping extraction
	I1002 08:03:53.961710  498230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:03:53.989571  498230 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:03:53.989600  498230 cache_images.go:85] Images are preloaded, skipping loading
	I1002 08:03:53.989609  498230 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 08:03:53.989766  498230 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-171347 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 08:03:53.989863  498230 ssh_runner.go:195] Run: crio config
	I1002 08:03:54.060074  498230 cni.go:84] Creating CNI manager for ""
	I1002 08:03:54.060098  498230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:03:54.060111  498230 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 08:03:54.060135  498230 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-171347 NodeName:embed-certs-171347 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 08:03:54.060272  498230 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-171347"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 08:03:54.060355  498230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 08:03:54.068818  498230 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 08:03:54.068896  498230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 08:03:54.076868  498230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1002 08:03:54.091011  498230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 08:03:54.110545  498230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1002 08:03:54.125767  498230 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 08:03:54.129520  498230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:03:54.139599  498230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:03:54.271186  498230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:03:54.287431  498230 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347 for IP: 192.168.85.2
	I1002 08:03:54.287452  498230 certs.go:195] generating shared ca certs ...
	I1002 08:03:54.287472  498230 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:03:54.287617  498230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 08:03:54.287666  498230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 08:03:54.287702  498230 certs.go:257] generating profile certs ...
	I1002 08:03:54.287808  498230 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/client.key
	I1002 08:03:54.287886  498230 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.key.2c92e75c
	I1002 08:03:54.287930  498230 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/proxy-client.key
	I1002 08:03:54.288052  498230 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 08:03:54.288098  498230 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 08:03:54.288113  498230 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 08:03:54.288139  498230 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 08:03:54.288165  498230 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 08:03:54.288195  498230 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 08:03:54.288241  498230 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:03:54.288877  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 08:03:54.317899  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 08:03:54.340661  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 08:03:54.361477  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 08:03:54.385973  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1002 08:03:54.408011  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 08:03:54.429903  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 08:03:54.457880  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 08:03:54.478953  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 08:03:54.500674  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 08:03:54.521979  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 08:03:54.546913  498230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 08:03:54.560719  498230 ssh_runner.go:195] Run: openssl version
	I1002 08:03:54.566921  498230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 08:03:54.580753  498230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 08:03:54.584951  498230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 08:03:54.585069  498230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 08:03:54.629665  498230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 08:03:54.638897  498230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 08:03:54.655295  498230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:03:54.659118  498230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:03:54.659224  498230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:03:54.701199  498230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 08:03:54.709205  498230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 08:03:54.717473  498230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 08:03:54.721274  498230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 08:03:54.721378  498230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 08:03:54.762854  498230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 08:03:54.771630  498230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 08:03:54.775631  498230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 08:03:54.817606  498230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 08:03:54.859578  498230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 08:03:54.902472  498230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 08:03:54.951535  498230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 08:03:55.024510  498230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 08:03:55.093941  498230 kubeadm.go:400] StartCluster: {Name:embed-certs-171347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:03:55.094085  498230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 08:03:55.094178  498230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:03:55.174355  498230 cri.go:89] found id: "6f3ca884c1303597bf9de27670995129fac9974f29ec3998eefcb79f460f2608"
	I1002 08:03:55.174427  498230 cri.go:89] found id: "a3295c18de5cd39930de6a29eafc9bfeb208a5f01b6be0d2f865fafae39a8562"
	I1002 08:03:55.174450  498230 cri.go:89] found id: "19e7d5d7bdca5512898a0c121ad4ff851265a3f8cf6c48dddb1e91460e0e5e12"
	I1002 08:03:55.174496  498230 cri.go:89] found id: "85a09c19828ce281864f49326c73b8b58d618d6e28f38bb8d34c435302ffd907"
	I1002 08:03:55.174524  498230 cri.go:89] found id: ""
	I1002 08:03:55.174598  498230 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 08:03:55.196095  498230 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:03:55Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:03:55.196228  498230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 08:03:55.229826  498230 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 08:03:55.229907  498230 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 08:03:55.229982  498230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 08:03:55.262510  498230 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 08:03:55.263147  498230 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-171347" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:03:55.263431  498230 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-292504/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-171347" cluster setting kubeconfig missing "embed-certs-171347" context setting]
	I1002 08:03:55.263966  498230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:03:55.265565  498230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 08:03:55.279166  498230 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 08:03:55.279246  498230 kubeadm.go:601] duration metric: took 49.317772ms to restartPrimaryControlPlane
	I1002 08:03:55.279272  498230 kubeadm.go:402] duration metric: took 185.347432ms to StartCluster
	I1002 08:03:55.279304  498230 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:03:55.279390  498230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:03:55.280662  498230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:03:55.280943  498230 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:03:55.281493  498230 config.go:182] Loaded profile config "embed-certs-171347": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:03:55.281545  498230 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 08:03:55.281741  498230 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-171347"
	I1002 08:03:55.281795  498230 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-171347"
	W1002 08:03:55.281816  498230 addons.go:247] addon storage-provisioner should already be in state true
	I1002 08:03:55.281857  498230 host.go:66] Checking if "embed-certs-171347" exists ...
	I1002 08:03:55.282483  498230 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:03:55.282697  498230 addons.go:69] Setting dashboard=true in profile "embed-certs-171347"
	I1002 08:03:55.282739  498230 addons.go:238] Setting addon dashboard=true in "embed-certs-171347"
	W1002 08:03:55.282765  498230 addons.go:247] addon dashboard should already be in state true
	I1002 08:03:55.282816  498230 host.go:66] Checking if "embed-certs-171347" exists ...
	I1002 08:03:55.283122  498230 addons.go:69] Setting default-storageclass=true in profile "embed-certs-171347"
	I1002 08:03:55.283139  498230 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-171347"
	I1002 08:03:55.283371  498230 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:03:55.283698  498230 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:03:55.286302  498230 out.go:179] * Verifying Kubernetes components...
	I1002 08:03:55.289534  498230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:03:55.330497  498230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 08:03:55.335323  498230 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:03:55.335345  498230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 08:03:55.335418  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:55.340351  498230 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 08:03:55.345068  498230 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 08:03:55.351384  498230 addons.go:238] Setting addon default-storageclass=true in "embed-certs-171347"
	W1002 08:03:55.351418  498230 addons.go:247] addon default-storageclass should already be in state true
	I1002 08:03:55.351449  498230 host.go:66] Checking if "embed-certs-171347" exists ...
	I1002 08:03:55.351913  498230 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:03:55.353065  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 08:03:55.353093  498230 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 08:03:55.353147  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:55.373582  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:03:55.407651  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:03:55.412530  498230 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 08:03:55.412551  498230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 08:03:55.412612  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:55.451817  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:03:55.640493  498230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:03:55.678927  498230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:03:55.762901  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 08:03:55.762923  498230 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 08:03:55.765218  498230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:03:55.807814  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 08:03:55.807890  498230 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 08:03:55.852773  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 08:03:55.852853  498230 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 08:03:55.959800  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 08:03:55.959873  498230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 08:03:56.012484  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 08:03:56.012567  498230 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 08:03:56.052324  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 08:03:56.052400  498230 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 08:03:56.083493  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 08:03:56.083594  498230 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 08:03:56.103149  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 08:03:56.103231  498230 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 08:03:56.131128  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 08:03:56.131204  498230 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 08:03:56.155536  498230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1002 08:03:54.654211  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	W1002 08:03:56.656714  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	I1002 08:03:58.654288  495337 pod_ready.go:94] pod "coredns-66bc5c9577-74zfp" is "Ready"
	I1002 08:03:58.654318  495337 pod_ready.go:86] duration metric: took 35.506240279s for pod "coredns-66bc5c9577-74zfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:58.657303  495337 pod_ready.go:83] waiting for pod "etcd-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:58.666220  495337 pod_ready.go:94] pod "etcd-no-preload-604182" is "Ready"
	I1002 08:03:58.666249  495337 pod_ready.go:86] duration metric: took 8.918318ms for pod "etcd-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:58.668675  495337 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:58.672524  495337 pod_ready.go:94] pod "kube-apiserver-no-preload-604182" is "Ready"
	I1002 08:03:58.672547  495337 pod_ready.go:86] duration metric: took 3.851383ms for pod "kube-apiserver-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:58.674982  495337 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:58.852730  495337 pod_ready.go:94] pod "kube-controller-manager-no-preload-604182" is "Ready"
	I1002 08:03:58.852808  495337 pod_ready.go:86] duration metric: took 177.764732ms for pod "kube-controller-manager-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:59.052005  495337 pod_ready.go:83] waiting for pod "kube-proxy-qn6pp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:59.451880  495337 pod_ready.go:94] pod "kube-proxy-qn6pp" is "Ready"
	I1002 08:03:59.451949  495337 pod_ready.go:86] duration metric: took 399.865487ms for pod "kube-proxy-qn6pp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:59.652429  495337 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:00.084547  495337 pod_ready.go:94] pod "kube-scheduler-no-preload-604182" is "Ready"
	I1002 08:04:00.084580  495337 pod_ready.go:86] duration metric: took 432.111742ms for pod "kube-scheduler-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:00.084606  495337 pod_ready.go:40] duration metric: took 36.941919729s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:04:00.356423  495337 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 08:04:00.379203  495337 out.go:179] * Done! kubectl is now configured to use "no-preload-604182" cluster and "default" namespace by default
	I1002 08:04:02.810985  498230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.17044965s)
	I1002 08:04:02.811040  498230 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.13208905s)
	I1002 08:04:02.811068  498230 node_ready.go:35] waiting up to 6m0s for node "embed-certs-171347" to be "Ready" ...
	I1002 08:04:02.811385  498230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.046144537s)
	I1002 08:04:02.853548  498230 node_ready.go:49] node "embed-certs-171347" is "Ready"
	I1002 08:04:02.853628  498230 node_ready.go:38] duration metric: took 42.547133ms for node "embed-certs-171347" to be "Ready" ...
	I1002 08:04:02.853755  498230 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:04:02.853871  498230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:04:02.948386  498230 api_server.go:72] duration metric: took 7.667380746s to wait for apiserver process to appear ...
	I1002 08:04:02.948458  498230 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:04:02.948493  498230 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 08:04:02.948710  498230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.793098685s)
	I1002 08:04:02.952052  498230 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-171347 addons enable metrics-server
	
	I1002 08:04:02.954957  498230 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1002 08:04:02.957881  498230 addons.go:514] duration metric: took 7.676326953s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1002 08:04:02.961604  498230 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 08:04:02.962755  498230 api_server.go:141] control plane version: v1.34.1
	I1002 08:04:02.962773  498230 api_server.go:131] duration metric: took 14.294458ms to wait for apiserver health ...
	I1002 08:04:02.962782  498230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:04:02.971031  498230 system_pods.go:59] 8 kube-system pods found
	I1002 08:04:02.971137  498230 system_pods.go:61] "coredns-66bc5c9577-h88d8" [2f1ec40b-c756-4c21-b68c-293d99715917] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:04:02.971164  498230 system_pods.go:61] "etcd-embed-certs-171347" [926ce91c-0431-4ba1-b44e-fffbf0775a3b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:04:02.971201  498230 system_pods.go:61] "kindnet-q6rpr" [debb56b0-5037-4c8f-83f9-277929580103] Running
	I1002 08:04:02.971231  498230 system_pods.go:61] "kube-apiserver-embed-certs-171347" [e47c2d75-962d-4fcc-b386-ca8894e72519] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:04:02.971258  498230 system_pods.go:61] "kube-controller-manager-embed-certs-171347" [d161f53c-5955-4fee-b51b-766596a6970c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:04:02.971298  498230 system_pods.go:61] "kube-proxy-jzmxf" [0bb71089-73b5-4b6c-92cd-0c4ba1aee456] Running
	I1002 08:04:02.971329  498230 system_pods.go:61] "kube-scheduler-embed-certs-171347" [8fbc6745-47c9-43ca-af46-4746f82e41f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:04:02.971355  498230 system_pods.go:61] "storage-provisioner" [b206ffb9-0004-486d-98ff-d23a63b69555] Running
	I1002 08:04:02.971389  498230 system_pods.go:74] duration metric: took 8.598102ms to wait for pod list to return data ...
	I1002 08:04:02.971416  498230 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:04:02.977526  498230 default_sa.go:45] found service account: "default"
	I1002 08:04:02.977603  498230 default_sa.go:55] duration metric: took 6.168262ms for default service account to be created ...
	I1002 08:04:02.977628  498230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 08:04:02.981608  498230 system_pods.go:86] 8 kube-system pods found
	I1002 08:04:02.981688  498230 system_pods.go:89] "coredns-66bc5c9577-h88d8" [2f1ec40b-c756-4c21-b68c-293d99715917] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:04:02.981713  498230 system_pods.go:89] "etcd-embed-certs-171347" [926ce91c-0431-4ba1-b44e-fffbf0775a3b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:04:02.981752  498230 system_pods.go:89] "kindnet-q6rpr" [debb56b0-5037-4c8f-83f9-277929580103] Running
	I1002 08:04:02.981781  498230 system_pods.go:89] "kube-apiserver-embed-certs-171347" [e47c2d75-962d-4fcc-b386-ca8894e72519] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:04:02.981805  498230 system_pods.go:89] "kube-controller-manager-embed-certs-171347" [d161f53c-5955-4fee-b51b-766596a6970c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:04:02.981842  498230 system_pods.go:89] "kube-proxy-jzmxf" [0bb71089-73b5-4b6c-92cd-0c4ba1aee456] Running
	I1002 08:04:02.981872  498230 system_pods.go:89] "kube-scheduler-embed-certs-171347" [8fbc6745-47c9-43ca-af46-4746f82e41f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:04:02.981897  498230 system_pods.go:89] "storage-provisioner" [b206ffb9-0004-486d-98ff-d23a63b69555] Running
	I1002 08:04:02.981937  498230 system_pods.go:126] duration metric: took 4.28671ms to wait for k8s-apps to be running ...
	I1002 08:04:02.981964  498230 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 08:04:02.982052  498230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:04:02.999501  498230 system_svc.go:56] duration metric: took 17.528046ms WaitForService to wait for kubelet
	I1002 08:04:02.999575  498230 kubeadm.go:586] duration metric: took 7.718574301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:04:02.999627  498230 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:04:03.009383  498230 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:04:03.009487  498230 node_conditions.go:123] node cpu capacity is 2
	I1002 08:04:03.009519  498230 node_conditions.go:105] duration metric: took 9.87077ms to run NodePressure ...
	I1002 08:04:03.009563  498230 start.go:241] waiting for startup goroutines ...
	I1002 08:04:03.009589  498230 start.go:246] waiting for cluster config update ...
	I1002 08:04:03.009617  498230 start.go:255] writing updated cluster config ...
	I1002 08:04:03.010049  498230 ssh_runner.go:195] Run: rm -f paused
	I1002 08:04:03.014702  498230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:04:03.074165  498230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h88d8" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 08:04:05.129445  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:07.594768  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:10.082113  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:12.085671  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
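
Note: the pod_ready.go lines above poll each kube-system pod until its Ready condition becomes True or a timeout expires. A minimal client-go sketch of that kind of check is shown below; it is an illustration only, not minikube's actual pod_ready.go. It assumes a kubeconfig at the default location, and the pod name is simply copied from the log above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name copied from the log above; adjust for your own cluster.
	const ns, name = "kube-system", "coredns-66bc5c9577-h88d8"
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			return isPodReady(pod), nil
		})
	fmt.Printf("pod %s/%s ready: %v\n", ns, name, err == nil)
}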
	
	
	==> CRI-O <==
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.583960275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.62519821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.66051678Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.702469692Z" level=info msg="Created container 3810207ffcd2ec126a0d091f5c46901cf5991af720346d8e2ae59ddae078ecea: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq/dashboard-metrics-scraper" id=ff2b9d97-77a6-4d8c-9e40-2cbeefb45054 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.72008368Z" level=info msg="Starting container: 3810207ffcd2ec126a0d091f5c46901cf5991af720346d8e2ae59ddae078ecea" id=3733b45f-4e4c-48f2-b11e-056c4fa0fdb2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.734948563Z" level=info msg="Started container" PID=1633 containerID=3810207ffcd2ec126a0d091f5c46901cf5991af720346d8e2ae59ddae078ecea description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq/dashboard-metrics-scraper id=3733b45f-4e4c-48f2-b11e-056c4fa0fdb2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9258d27863ab661daf971138271a561550cd298af24cac74d296aecbe1931594
	Oct 02 08:04:00 no-preload-604182 conmon[1631]: conmon 3810207ffcd2ec126a0d <ninfo>: container 1633 exited with status 1
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.828981767Z" level=info msg="Removing container: 2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957" id=b5bb2773-a9ab-4978-ba5a-08317395ebad name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.843420013Z" level=info msg="Error loading conmon cgroup of container 2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957: cgroup deleted" id=b5bb2773-a9ab-4978-ba5a-08317395ebad name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.848130776Z" level=info msg="Removed container 2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq/dashboard-metrics-scraper" id=b5bb2773-a9ab-4978-ba5a-08317395ebad name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.505911663Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.510733074Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.510764631Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.510785095Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.515521713Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.515675848Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.515760887Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.519848604Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.520011814Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.520081763Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.529326074Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.529487528Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.529576185Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.535904744Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.536098879Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	3810207ffcd2e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   9258d27863ab6       dashboard-metrics-scraper-6ffb444bf9-z9xnq   kubernetes-dashboard
	cf91d8320e31d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           23 seconds ago      Running             storage-provisioner         2                   02c3f20713407       storage-provisioner                          kube-system
	7085a5c11d906       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago      Running             kubernetes-dashboard        0                   891e06acb5da5       kubernetes-dashboard-855c9754f9-dmlvr        kubernetes-dashboard
	c8704e5c1cedb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago      Running             coredns                     1                   53c917b65991e       coredns-66bc5c9577-74zfp                     kube-system
	5891601f179a9       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago      Running             busybox                     1                   ecdfd6640e7a4       busybox                                      default
	941fe1e375ab8       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           54 seconds ago      Exited              storage-provisioner         1                   02c3f20713407       storage-provisioner                          kube-system
	149d1489fe735       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago      Running             kube-proxy                  1                   a7afc74a47b50       kube-proxy-qn6pp                             kube-system
	345b977a1a5ae       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago      Running             kindnet-cni                 1                   f34c6133c7796       kindnet-5zjv7                                kube-system
	4164431db5f86       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   670f209653373       etcd-no-preload-604182                       kube-system
	3e1fc7a1946e3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   92398963260b1       kube-apiserver-no-preload-604182             kube-system
	77029f6aa5b62       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   7e33fca238475       kube-controller-manager-no-preload-604182    kube-system
	3094807a90d6d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   d50d3665a574e       kube-scheduler-no-preload-604182             kube-system
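
Note: the container status table above is the crictl-style view of what CRI-O reports over its CRI socket (compare the earlier `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call in the log). A short Go sketch of the same query against the runtime service follows; the socket path /var/run/crio/crio.sock is an assumed default and not taken from this report.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O socket path; crictl talks to the same endpoint by default.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// List every container (running and exited), like `crictl ps -a`.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.13s  %-30s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}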
	
	
	==> coredns [c8704e5c1cedb8c825267c9042fc932867f91cfb0f2a0998dac40e8955311969] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58507 - 45460 "HINFO IN 6588191415132121994.7007080049064169055. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005424731s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-604182
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-604182
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=no-preload-604182
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T08_02_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 08:02:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-604182
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:04:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:04:12 +0000   Thu, 02 Oct 2025 08:02:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:04:12 +0000   Thu, 02 Oct 2025 08:02:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:04:12 +0000   Thu, 02 Oct 2025 08:02:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 08:04:12 +0000   Thu, 02 Oct 2025 08:02:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-604182
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 701c78548bff440eb2e4480981a54c06
	  System UUID:                65f354cd-b030-437d-9beb-12ea491c6172
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-74zfp                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-no-preload-604182                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-5zjv7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-604182              250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-604182     200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-qn6pp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-604182              100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-z9xnq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dmlvr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 109s                 kube-proxy       
	  Normal   Starting                 52s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node no-preload-604182 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node no-preload-604182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node no-preload-604182 status is now: NodeHasSufficientPID
	  Normal   Starting                 116s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 116s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    115s                 kubelet          Node no-preload-604182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     115s                 kubelet          Node no-preload-604182 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  115s                 kubelet          Node no-preload-604182 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           111s                 node-controller  Node no-preload-604182 event: Registered Node no-preload-604182 in Controller
	  Normal   NodeReady                94s                  kubelet          Node no-preload-604182 status is now: NodeReady
	  Normal   Starting                 61s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)    kubelet          Node no-preload-604182 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)    kubelet          Node no-preload-604182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)    kubelet          Node no-preload-604182 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                  node-controller  Node no-preload-604182 event: Registered Node no-preload-604182 in Controller
	
	
	==> dmesg <==
	[Oct 2 07:33] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:00] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:03] overlayfs: idmapped layers are currently not supported
	[ +38.953360] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4164431db5f8614c900dab52a55fbc230192e5350939fe5d0d56bfc4b9f37616] <==
	{"level":"warn","ts":"2025-10-02T08:03:19.050035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.073550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.107736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.123663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.144502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.169834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.179612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.200281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.221741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.260395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.281619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.291940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.308143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.337391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.355933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.396999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.439520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.478424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.496259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.532138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.542153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.588065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.615869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.639537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.789256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53496","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:04:16 up  2:46,  0 user,  load average: 5.04, 3.24, 2.27
	Linux no-preload-604182 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [345b977a1a5ae88c18319ef442b740dfcd5b6f2cff29fd84e21439458e7a131c] <==
	I1002 08:03:22.237961       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 08:03:22.299312       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 08:03:22.299563       1 main.go:148] setting mtu 1500 for CNI 
	I1002 08:03:22.299615       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 08:03:22.299657       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T08:03:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 08:03:22.505046       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 08:03:22.505075       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 08:03:22.505094       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 08:03:22.510996       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 08:03:52.506015       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 08:03:52.506156       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 08:03:52.511645       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 08:03:52.511758       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1002 08:03:53.805860       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 08:03:53.805896       1 metrics.go:72] Registering metrics
	I1002 08:03:53.805961       1 controller.go:711] "Syncing nftables rules"
	I1002 08:04:02.505646       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:04:02.505698       1 main.go:301] handling current node
	I1002 08:04:12.508052       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:04:12.508104       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3e1fc7a1946e3a39d39fe7e56e659a01f9a77a1b064829ae68f8e7533e1798bc] <==
	I1002 08:03:21.170266       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 08:03:21.170275       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 08:03:21.203418       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 08:03:21.223621       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 08:03:21.224184       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 08:03:21.224365       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 08:03:21.224373       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 08:03:21.224471       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 08:03:21.260545       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:03:21.270848       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 08:03:21.281492       1 cache.go:39] Caches are synced for autoregister controller
	E1002 08:03:21.303978       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 08:03:21.351134       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 08:03:21.351251       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 08:03:21.573030       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 08:03:21.630178       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:03:22.170937       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 08:03:22.308695       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 08:03:22.398670       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:03:22.442814       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:03:22.564420       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.53.77"}
	I1002 08:03:22.616775       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.133.8"}
	I1002 08:03:25.648125       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 08:03:25.776558       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 08:03:25.803696       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [77029f6aa5b6233463612c47bb436aebdb6578cbd16ee091398e61c2c07d4608] <==
	I1002 08:03:25.219200       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 08:03:25.219242       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 08:03:25.219378       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 08:03:25.223146       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 08:03:25.224362       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 08:03:25.226669       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:03:25.226745       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 08:03:25.231501       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 08:03:25.232586       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 08:03:25.233835       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 08:03:25.235110       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 08:03:25.240945       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 08:03:25.241143       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 08:03:25.241776       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 08:03:25.242749       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 08:03:25.243047       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 08:03:25.243719       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 08:03:25.247869       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 08:03:25.248015       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-604182"
	I1002 08:03:25.248069       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 08:03:25.248121       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 08:03:25.259339       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 08:03:25.266283       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 08:03:25.268689       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 08:03:25.279962       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [149d1489fe735da77b26aff3ec794c7c79ef2de589160921415fb965adcead0f] <==
	I1002 08:03:22.625279       1 server_linux.go:53] "Using iptables proxy"
	I1002 08:03:23.244671       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 08:03:23.353944       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 08:03:23.355093       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 08:03:23.355172       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 08:03:23.402345       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 08:03:23.402410       1 server_linux.go:132] "Using iptables Proxier"
	I1002 08:03:23.407345       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 08:03:23.407712       1 server.go:527] "Version info" version="v1.34.1"
	I1002 08:03:23.407895       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:03:23.409117       1 config.go:200] "Starting service config controller"
	I1002 08:03:23.409182       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 08:03:23.409238       1 config.go:106] "Starting endpoint slice config controller"
	I1002 08:03:23.409268       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 08:03:23.409303       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 08:03:23.409328       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 08:03:23.410056       1 config.go:309] "Starting node config controller"
	I1002 08:03:23.410130       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 08:03:23.410162       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 08:03:23.515365       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 08:03:23.515410       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 08:03:23.515454       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3094807a90d6dcd41655425e2f8000995d5181c4b8e85810c853b4db03b96dc4] <==
	I1002 08:03:21.745637       1 serving.go:386] Generated self-signed cert in-memory
	I1002 08:03:24.870945       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 08:03:24.870985       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:03:24.879372       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 08:03:24.879484       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 08:03:24.879555       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:03:24.879588       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:03:24.879638       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:03:24.879709       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:03:24.879825       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 08:03:24.879950       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 08:03:24.979824       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 08:03:24.979824       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:03:24.979858       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 08:03:25 no-preload-604182 kubelet[769]: I1002 08:03:25.909996     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzk4r\" (UniqueName: \"kubernetes.io/projected/51c88494-20e6-4c12-ba1b-f8f2acc204ee-kube-api-access-fzk4r\") pod \"dashboard-metrics-scraper-6ffb444bf9-z9xnq\" (UID: \"51c88494-20e6-4c12-ba1b-f8f2acc204ee\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq"
	Oct 02 08:03:25 no-preload-604182 kubelet[769]: I1002 08:03:25.910022     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7a97e796-8ad8-47b3-8086-2f9a8da34762-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-dmlvr\" (UID: \"7a97e796-8ad8-47b3-8086-2f9a8da34762\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dmlvr"
	Oct 02 08:03:25 no-preload-604182 kubelet[769]: I1002 08:03:25.910042     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/51c88494-20e6-4c12-ba1b-f8f2acc204ee-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-z9xnq\" (UID: \"51c88494-20e6-4c12-ba1b-f8f2acc204ee\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq"
	Oct 02 08:03:26 no-preload-604182 kubelet[769]: W1002 08:03:26.101899     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/crio-9258d27863ab661daf971138271a561550cd298af24cac74d296aecbe1931594 WatchSource:0}: Error finding container 9258d27863ab661daf971138271a561550cd298af24cac74d296aecbe1931594: Status 404 returned error can't find the container with id 9258d27863ab661daf971138271a561550cd298af24cac74d296aecbe1931594
	Oct 02 08:03:26 no-preload-604182 kubelet[769]: W1002 08:03:26.102345     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/crio-891e06acb5da58b5ca866aaede14f35fd7f71916a1cdb3d93394e45a27df9845 WatchSource:0}: Error finding container 891e06acb5da58b5ca866aaede14f35fd7f71916a1cdb3d93394e45a27df9845: Status 404 returned error can't find the container with id 891e06acb5da58b5ca866aaede14f35fd7f71916a1cdb3d93394e45a27df9845
	Oct 02 08:03:28 no-preload-604182 kubelet[769]: I1002 08:03:28.181961     769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 08:03:37 no-preload-604182 kubelet[769]: I1002 08:03:37.769175     769 scope.go:117] "RemoveContainer" containerID="badde932c30f4c93c898313fb350420a0312ad10e6a9ab4acc18ce74368761ff"
	Oct 02 08:03:37 no-preload-604182 kubelet[769]: I1002 08:03:37.788761     769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dmlvr" podStartSLOduration=7.824294946 podStartE2EDuration="12.788742572s" podCreationTimestamp="2025-10-02 08:03:25 +0000 UTC" firstStartedPulling="2025-10-02 08:03:26.106708946 +0000 UTC m=+10.754949390" lastFinishedPulling="2025-10-02 08:03:31.071156563 +0000 UTC m=+15.719397016" observedRunningTime="2025-10-02 08:03:31.817011262 +0000 UTC m=+16.465251789" watchObservedRunningTime="2025-10-02 08:03:37.788742572 +0000 UTC m=+22.436983016"
	Oct 02 08:03:38 no-preload-604182 kubelet[769]: I1002 08:03:38.774470     769 scope.go:117] "RemoveContainer" containerID="badde932c30f4c93c898313fb350420a0312ad10e6a9ab4acc18ce74368761ff"
	Oct 02 08:03:38 no-preload-604182 kubelet[769]: I1002 08:03:38.775605     769 scope.go:117] "RemoveContainer" containerID="2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957"
	Oct 02 08:03:38 no-preload-604182 kubelet[769]: E1002 08:03:38.775933     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z9xnq_kubernetes-dashboard(51c88494-20e6-4c12-ba1b-f8f2acc204ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq" podUID="51c88494-20e6-4c12-ba1b-f8f2acc204ee"
	Oct 02 08:03:39 no-preload-604182 kubelet[769]: I1002 08:03:39.778618     769 scope.go:117] "RemoveContainer" containerID="2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957"
	Oct 02 08:03:39 no-preload-604182 kubelet[769]: E1002 08:03:39.778777     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z9xnq_kubernetes-dashboard(51c88494-20e6-4c12-ba1b-f8f2acc204ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq" podUID="51c88494-20e6-4c12-ba1b-f8f2acc204ee"
	Oct 02 08:03:46 no-preload-604182 kubelet[769]: I1002 08:03:46.067584     769 scope.go:117] "RemoveContainer" containerID="2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957"
	Oct 02 08:03:46 no-preload-604182 kubelet[769]: E1002 08:03:46.067828     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z9xnq_kubernetes-dashboard(51c88494-20e6-4c12-ba1b-f8f2acc204ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq" podUID="51c88494-20e6-4c12-ba1b-f8f2acc204ee"
	Oct 02 08:03:52 no-preload-604182 kubelet[769]: I1002 08:03:52.807715     769 scope.go:117] "RemoveContainer" containerID="941fe1e375ab8b5c7819755f3d2feb5bcdaf2abeb7390f95036f174f13325d9f"
	Oct 02 08:04:00 no-preload-604182 kubelet[769]: I1002 08:04:00.549102     769 scope.go:117] "RemoveContainer" containerID="2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957"
	Oct 02 08:04:00 no-preload-604182 kubelet[769]: I1002 08:04:00.827832     769 scope.go:117] "RemoveContainer" containerID="2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957"
	Oct 02 08:04:01 no-preload-604182 kubelet[769]: I1002 08:04:01.831629     769 scope.go:117] "RemoveContainer" containerID="3810207ffcd2ec126a0d091f5c46901cf5991af720346d8e2ae59ddae078ecea"
	Oct 02 08:04:01 no-preload-604182 kubelet[769]: E1002 08:04:01.831810     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z9xnq_kubernetes-dashboard(51c88494-20e6-4c12-ba1b-f8f2acc204ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq" podUID="51c88494-20e6-4c12-ba1b-f8f2acc204ee"
	Oct 02 08:04:06 no-preload-604182 kubelet[769]: I1002 08:04:06.067450     769 scope.go:117] "RemoveContainer" containerID="3810207ffcd2ec126a0d091f5c46901cf5991af720346d8e2ae59ddae078ecea"
	Oct 02 08:04:06 no-preload-604182 kubelet[769]: E1002 08:04:06.067679     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z9xnq_kubernetes-dashboard(51c88494-20e6-4c12-ba1b-f8f2acc204ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq" podUID="51c88494-20e6-4c12-ba1b-f8f2acc204ee"
	Oct 02 08:04:13 no-preload-604182 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 08:04:13 no-preload-604182 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 08:04:13 no-preload-604182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [7085a5c11d9068aa1530ac0f9c639bae1b8214bbc8ae69419c1885816bfc2422] <==
	2025/10/02 08:03:31 Using namespace: kubernetes-dashboard
	2025/10/02 08:03:31 Using in-cluster config to connect to apiserver
	2025/10/02 08:03:31 Using secret token for csrf signing
	2025/10/02 08:03:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 08:03:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 08:03:31 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 08:03:31 Generating JWE encryption key
	2025/10/02 08:03:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 08:03:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 08:03:33 Initializing JWE encryption key from synchronized object
	2025/10/02 08:03:33 Creating in-cluster Sidecar client
	2025/10/02 08:03:33 Serving insecurely on HTTP port: 9090
	2025/10/02 08:03:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 08:04:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 08:03:31 Starting overwatch
	
	
	==> storage-provisioner [941fe1e375ab8b5c7819755f3d2feb5bcdaf2abeb7390f95036f174f13325d9f] <==
	I1002 08:03:22.267922       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 08:03:52.269497       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cf91d8320e31d1cdb4432930b9af5cdeb2b936e99b90b8a85fab2f65fd803d34] <==
	I1002 08:03:52.894268       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 08:03:52.907257       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 08:03:52.907365       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 08:03:52.909637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:56.364561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:00.640682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:04.238886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:07.293609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:10.315594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:10.320785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:04:10.320939       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 08:04:10.321121       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-604182_2645f5b0-8a9a-4e1b-b4b3-cbb2009532e8!
	I1002 08:04:10.321852       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce44b2f7-3b72-4264-8678-b29a955c98d3", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-604182_2645f5b0-8a9a-4e1b-b4b3-cbb2009532e8 became leader
	W1002 08:04:10.326303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:10.333017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:04:10.422131       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-604182_2645f5b0-8a9a-4e1b-b4b3-cbb2009532e8!
	W1002 08:04:12.343421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:12.358559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:14.362487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:14.367348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:16.371032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:16.383670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-604182 -n no-preload-604182
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-604182 -n no-preload-604182: exit status 2 (459.766546ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-604182 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-604182
helpers_test.go:243: (dbg) docker inspect no-preload-604182:

-- stdout --
	[
	    {
	        "Id": "eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd",
	        "Created": "2025-10-02T08:01:27.464953821Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 495462,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T08:03:08.837901343Z",
	            "FinishedAt": "2025-10-02T08:03:08.014060494Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/hosts",
	        "LogPath": "/var/lib/docker/containers/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd-json.log",
	        "Name": "/no-preload-604182",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-604182:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-604182",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd",
	                "LowerDir": "/var/lib/docker/overlay2/16b601c8b3476133a497e1d1758975b5ed20ca2deca3a8c241f50363fd47c895-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16b601c8b3476133a497e1d1758975b5ed20ca2deca3a8c241f50363fd47c895/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16b601c8b3476133a497e1d1758975b5ed20ca2deca3a8c241f50363fd47c895/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16b601c8b3476133a497e1d1758975b5ed20ca2deca3a8c241f50363fd47c895/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-604182",
	                "Source": "/var/lib/docker/volumes/no-preload-604182/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-604182",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-604182",
	                "name.minikube.sigs.k8s.io": "no-preload-604182",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4748f59ce4b03d04f32b8bbf44aa9636009784c2190ef9b48c166c098d23ff4b",
	            "SandboxKey": "/var/run/docker/netns/4748f59ce4b0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-604182": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:5d:bd:80:5a:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b49b2bd463034ec68025fea3957066414ae3acd9986e1db0b657dcf84796d697",
	                    "EndpointID": "7aa11ca2f66453e398e22eb741b563208c0a05a7d810b21364396a012be4e426",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-604182",
	                        "eb7634b68495"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-604182 -n no-preload-604182
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-604182 -n no-preload-604182: exit status 2 (459.0853ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-604182 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-604182 logs -n 25: (1.335274526s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-654417 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-654417    │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ delete  │ -p cert-options-654417                                                                                                                                                                                                                        │ cert-options-654417    │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:58 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 07:58 UTC │ 02 Oct 25 07:59 UTC │
	│ start   │ -p cert-expiration-759246 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-759246 │ jenkins │ v1.37.0 │ 02 Oct 25 07:59 UTC │ 02 Oct 25 08:01 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-356986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │                     │
	│ stop    │ -p old-k8s-version-356986 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:00 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-356986 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:00 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:01 UTC │
	│ image   │ old-k8s-version-356986 image list --format=json                                                                                                                                                                                               │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ pause   │ -p old-k8s-version-356986 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │                     │
	│ delete  │ -p old-k8s-version-356986                                                                                                                                                                                                                     │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ delete  │ -p old-k8s-version-356986                                                                                                                                                                                                                     │ old-k8s-version-356986 │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182      │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:02 UTC │
	│ delete  │ -p cert-expiration-759246                                                                                                                                                                                                                     │ cert-expiration-759246 │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347     │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-604182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-604182      │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │                     │
	│ stop    │ -p no-preload-604182 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-604182      │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p no-preload-604182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-604182      │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182      │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-171347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-171347     │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │                     │
	│ stop    │ -p embed-certs-171347 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-171347     │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-171347 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-171347     │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347     │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │                     │
	│ image   │ no-preload-604182 image list --format=json                                                                                                                                                                                                    │ no-preload-604182      │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p no-preload-604182 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-604182      │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:03:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:03:47.207337  498230 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:03:47.207477  498230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:03:47.207490  498230 out.go:374] Setting ErrFile to fd 2...
	I1002 08:03:47.207495  498230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:03:47.207782  498230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:03:47.208239  498230 out.go:368] Setting JSON to false
	I1002 08:03:47.209341  498230 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9979,"bootTime":1759382249,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 08:03:47.209420  498230 start.go:140] virtualization:  
	I1002 08:03:47.212701  498230 out.go:179] * [embed-certs-171347] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:03:47.219232  498230 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:03:47.219269  498230 notify.go:220] Checking for updates...
	I1002 08:03:47.225166  498230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:03:47.228208  498230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:03:47.231242  498230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 08:03:47.234200  498230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:03:47.237078  498230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:03:47.240540  498230 config.go:182] Loaded profile config "embed-certs-171347": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:03:47.241242  498230 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:03:47.272017  498230 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:03:47.272137  498230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:03:47.347915  498230 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:03:47.337966033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:03:47.348032  498230 docker.go:318] overlay module found
	I1002 08:03:47.351239  498230 out.go:179] * Using the docker driver based on existing profile
	I1002 08:03:47.354068  498230 start.go:304] selected driver: docker
	I1002 08:03:47.354090  498230 start.go:924] validating driver "docker" against &{Name:embed-certs-171347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:03:47.354196  498230 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:03:47.354931  498230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:03:47.405699  498230 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:03:47.395896446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:03:47.406045  498230 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:03:47.406083  498230 cni.go:84] Creating CNI manager for ""
	I1002 08:03:47.406151  498230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:03:47.406201  498230 start.go:348] cluster config:
	{Name:embed-certs-171347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:03:47.409439  498230 out.go:179] * Starting "embed-certs-171347" primary control-plane node in "embed-certs-171347" cluster
	I1002 08:03:47.412198  498230 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 08:03:47.415193  498230 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 08:03:47.418082  498230 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:03:47.418153  498230 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 08:03:47.418159  498230 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 08:03:47.418167  498230 cache.go:58] Caching tarball of preloaded images
	I1002 08:03:47.418360  498230 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 08:03:47.418371  498230 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 08:03:47.418487  498230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/config.json ...
	I1002 08:03:47.437944  498230 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 08:03:47.437970  498230 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 08:03:47.437988  498230 cache.go:232] Successfully downloaded all kic artifacts
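
The cache checks above are what keep this restart cheap: the lz4 preload tarball named in the log bundles every image needed for Kubernetes v1.34.1 on CRI-O, and the kic base image is already present in the local Docker daemon, so nothing is pulled from a registry. Two illustrative commands to confirm both artifacts on the Jenkins host (not part of the test flow):

	ls -lh /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	docker images --digests gcr.io/k8s-minikube/kicbase-builds
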
	I1002 08:03:47.438011  498230 start.go:360] acquireMachinesLock for embed-certs-171347: {Name:mk251fc9b359c61a60beaff4e6d636acffa89ca4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:03:47.438086  498230 start.go:364] duration metric: took 37.638µs to acquireMachinesLock for "embed-certs-171347"
	I1002 08:03:47.438114  498230 start.go:96] Skipping create...Using existing machine configuration
	I1002 08:03:47.438128  498230 fix.go:54] fixHost starting: 
	I1002 08:03:47.438414  498230 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:03:47.455405  498230 fix.go:112] recreateIfNeeded on embed-certs-171347: state=Stopped err=<nil>
	W1002 08:03:47.455436  498230 fix.go:138] unexpected machine state, will restart: <nil>
	W1002 08:03:43.653584  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	W1002 08:03:45.654397  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	W1002 08:03:47.655009  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	I1002 08:03:47.458713  498230 out.go:252] * Restarting existing docker container for "embed-certs-171347" ...
	I1002 08:03:47.458799  498230 cli_runner.go:164] Run: docker start embed-certs-171347
	I1002 08:03:47.730694  498230 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:03:47.753582  498230 kic.go:430] container "embed-certs-171347" state is running.
	I1002 08:03:47.754232  498230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-171347
	I1002 08:03:47.777348  498230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/config.json ...
	I1002 08:03:47.778175  498230 machine.go:93] provisionDockerMachine start ...
	I1002 08:03:47.778336  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:47.803620  498230 main.go:141] libmachine: Using SSH client type: native
	I1002 08:03:47.803958  498230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1002 08:03:47.803975  498230 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 08:03:47.804864  498230 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 08:03:50.942726  498230 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-171347
	
	I1002 08:03:50.942750  498230 ubuntu.go:182] provisioning hostname "embed-certs-171347"
	I1002 08:03:50.942815  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:50.960573  498230 main.go:141] libmachine: Using SSH client type: native
	I1002 08:03:50.960888  498230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1002 08:03:50.960911  498230 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-171347 && echo "embed-certs-171347" | sudo tee /etc/hostname
	I1002 08:03:51.110053  498230 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-171347
	
	I1002 08:03:51.110163  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:51.128495  498230 main.go:141] libmachine: Using SSH client type: native
	I1002 08:03:51.128911  498230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1002 08:03:51.128936  498230 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-171347' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-171347/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-171347' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 08:03:51.271558  498230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 08:03:51.271590  498230 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 08:03:51.271610  498230 ubuntu.go:190] setting up certificates
	I1002 08:03:51.271620  498230 provision.go:84] configureAuth start
	I1002 08:03:51.271679  498230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-171347
	I1002 08:03:51.288858  498230 provision.go:143] copyHostCerts
	I1002 08:03:51.288930  498230 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 08:03:51.288952  498230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 08:03:51.289029  498230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 08:03:51.289155  498230 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 08:03:51.289167  498230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 08:03:51.289197  498230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 08:03:51.289263  498230 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 08:03:51.289273  498230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 08:03:51.289298  498230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 08:03:51.289359  498230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.embed-certs-171347 san=[127.0.0.1 192.168.85.2 embed-certs-171347 localhost minikube]
	I1002 08:03:51.625001  498230 provision.go:177] copyRemoteCerts
	I1002 08:03:51.625104  498230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 08:03:51.625162  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:51.645259  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:03:51.747471  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1002 08:03:51.766365  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 08:03:51.786274  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 08:03:51.806005  498230 provision.go:87] duration metric: took 534.350467ms to configureAuth
	I1002 08:03:51.806086  498230 ubuntu.go:206] setting minikube options for container-runtime
	I1002 08:03:51.806346  498230 config.go:182] Loaded profile config "embed-certs-171347": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:03:51.806547  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:51.825037  498230 main.go:141] libmachine: Using SSH client type: native
	I1002 08:03:51.825352  498230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1002 08:03:51.825366  498230 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 08:03:52.144044  498230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 08:03:52.144144  498230 machine.go:96] duration metric: took 4.365941285s to provisionDockerMachine
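
The sysconfig fragment written just above hands CRI-O an extra `--insecure-registry 10.96.0.0/12` flag covering the cluster service CIDR, presumably so registries exposed on in-cluster service IPs (such as the registry addon) can be used over plain HTTP; the crio unit in the kic base image is assumed to pick the file up via an EnvironmentFile= reference. An illustrative check from the host:

	docker exec embed-certs-171347 cat /etc/sysconfig/crio.minikube
	# expected content: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
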
	I1002 08:03:52.144172  498230 start.go:293] postStartSetup for "embed-certs-171347" (driver="docker")
	I1002 08:03:52.144226  498230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 08:03:52.144326  498230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 08:03:52.144400  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:52.166894  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	W1002 08:03:50.153345  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	W1002 08:03:52.158161  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	I1002 08:03:52.267790  498230 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 08:03:52.271691  498230 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 08:03:52.271719  498230 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 08:03:52.271729  498230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 08:03:52.271788  498230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 08:03:52.271865  498230 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 08:03:52.271969  498230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 08:03:52.282671  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:03:52.302467  498230 start.go:296] duration metric: took 158.246611ms for postStartSetup
	I1002 08:03:52.302579  498230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 08:03:52.302643  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:52.320046  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:03:52.413345  498230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 08:03:52.418398  498230 fix.go:56] duration metric: took 4.980271082s for fixHost
	I1002 08:03:52.418422  498230 start.go:83] releasing machines lock for "embed-certs-171347", held for 4.980327674s
	I1002 08:03:52.418499  498230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-171347
	I1002 08:03:52.436540  498230 ssh_runner.go:195] Run: cat /version.json
	I1002 08:03:52.436592  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:52.436614  498230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 08:03:52.436677  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:52.455003  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:03:52.456630  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:03:52.644749  498230 ssh_runner.go:195] Run: systemctl --version
	I1002 08:03:52.653090  498230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 08:03:52.690194  498230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 08:03:52.695547  498230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 08:03:52.695623  498230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 08:03:52.703635  498230 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 08:03:52.703661  498230 start.go:495] detecting cgroup driver to use...
	I1002 08:03:52.703724  498230 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 08:03:52.703799  498230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 08:03:52.718972  498230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 08:03:52.732418  498230 docker.go:218] disabling cri-docker service (if available) ...
	I1002 08:03:52.732538  498230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 08:03:52.748515  498230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 08:03:52.762615  498230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 08:03:52.930574  498230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 08:03:53.062910  498230 docker.go:234] disabling docker service ...
	I1002 08:03:53.063005  498230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 08:03:53.079302  498230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 08:03:53.094329  498230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 08:03:53.232528  498230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 08:03:53.350952  498230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 08:03:53.369946  498230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 08:03:53.386419  498230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 08:03:53.386553  498230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:53.396313  498230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 08:03:53.396436  498230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:53.406298  498230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:53.416137  498230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:53.433425  498230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 08:03:53.447908  498230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:53.457888  498230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:53.467856  498230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:03:53.477269  498230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 08:03:53.485467  498230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 08:03:53.495744  498230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:03:53.616859  498230 ssh_runner.go:195] Run: sudo systemctl restart crio
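
The run of sed commands above reconfigures CRI-O in place rather than templating a fresh file: it pins the pause image, switches the cgroup manager to cgroupfs (matching the driver detected on the host a moment earlier), moves conmon into the pod cgroup, and allows unprivileged binds to low ports through default_sysctls, then reloads systemd and restarts crio. After those edits the affected keys of /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows (section headers added for orientation; this is an illustrative reconstruction, not a dump of the file):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
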
	I1002 08:03:53.765616  498230 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 08:03:53.765730  498230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 08:03:53.769772  498230 start.go:563] Will wait 60s for crictl version
	I1002 08:03:53.769847  498230 ssh_runner.go:195] Run: which crictl
	I1002 08:03:53.773631  498230 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 08:03:53.802821  498230 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 08:03:53.802921  498230 ssh_runner.go:195] Run: crio --version
	I1002 08:03:53.845591  498230 ssh_runner.go:195] Run: crio --version
	I1002 08:03:53.886539  498230 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 08:03:53.889395  498230 cli_runner.go:164] Run: docker network inspect embed-certs-171347 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:03:53.905164  498230 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 08:03:53.909166  498230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
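
The one-liner above is the idempotent /etc/hosts update minikube uses: drop any existing host.minikube.internal line, append a fresh one pointing at the network gateway 192.168.85.1, and copy the temp file back over /etc/hosts with sudo. The same pattern is reused a little further down for control-plane.minikube.internal. An illustrative check from inside the node:

	grep -E 'minikube\.internal' /etc/hosts
	# expected: 192.168.85.1	host.minikube.internal
	#           192.168.85.2	control-plane.minikube.internal
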
	I1002 08:03:53.919272  498230 kubeadm.go:883] updating cluster {Name:embed-certs-171347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 08:03:53.919385  498230 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:03:53.919446  498230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:03:53.961622  498230 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:03:53.961647  498230 crio.go:433] Images already preloaded, skipping extraction
	I1002 08:03:53.961710  498230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:03:53.989571  498230 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:03:53.989600  498230 cache_images.go:85] Images are preloaded, skipping loading
	I1002 08:03:53.989609  498230 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 08:03:53.989766  498230 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-171347 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
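
The kubelet fragment printed above follows the standard systemd override pattern: an empty ExecStart= first clears any previously defined command, then the full kubelet invocation with --node-ip and --hostname-override pinned to this node is supplied; it is written out a few lines below as the 10-kubeadm.conf drop-in. To see exactly what the service will execute after the reload, something like this works from the host (illustrative):

	docker exec embed-certs-171347 systemctl cat kubelet
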
	I1002 08:03:53.989863  498230 ssh_runner.go:195] Run: crio config
	I1002 08:03:54.060074  498230 cni.go:84] Creating CNI manager for ""
	I1002 08:03:54.060098  498230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:03:54.060111  498230 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 08:03:54.060135  498230 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-171347 NodeName:embed-certs-171347 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 08:03:54.060272  498230 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-171347"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
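
Several settings in the rendered kubeadm/kubelet/kube-proxy config are tuned for a disposable CI node rather than production: the eviction thresholds are zeroed and imageGCHighThresholdPercent is 100 so the kubelet never evicts pods or garbage-collects images under disk pressure, failSwapOn is false, and kube-proxy's conntrack values are zeroed so it does not try to adjust host sysctls from inside the container. The file is staged below as /var/tmp/minikube/kubeadm.yaml.new and, as the later diff shows, only triggers reconfiguration when it differs from the existing kubeadm.yaml. One way to sanity-check the rendered file by hand, outside the minikube flow shown here, would be a kubeadm dry run against it:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
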
	
	I1002 08:03:54.060355  498230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 08:03:54.068818  498230 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 08:03:54.068896  498230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 08:03:54.076868  498230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1002 08:03:54.091011  498230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 08:03:54.110545  498230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1002 08:03:54.125767  498230 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 08:03:54.129520  498230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:03:54.139599  498230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:03:54.271186  498230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:03:54.287431  498230 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347 for IP: 192.168.85.2
	I1002 08:03:54.287452  498230 certs.go:195] generating shared ca certs ...
	I1002 08:03:54.287472  498230 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:03:54.287617  498230 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 08:03:54.287666  498230 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 08:03:54.287702  498230 certs.go:257] generating profile certs ...
	I1002 08:03:54.287808  498230 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/client.key
	I1002 08:03:54.287886  498230 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.key.2c92e75c
	I1002 08:03:54.287930  498230 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/proxy-client.key
	I1002 08:03:54.288052  498230 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 08:03:54.288098  498230 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 08:03:54.288113  498230 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 08:03:54.288139  498230 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 08:03:54.288165  498230 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 08:03:54.288195  498230 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 08:03:54.288241  498230 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:03:54.288877  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 08:03:54.317899  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 08:03:54.340661  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 08:03:54.361477  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 08:03:54.385973  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1002 08:03:54.408011  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 08:03:54.429903  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 08:03:54.457880  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/embed-certs-171347/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 08:03:54.478953  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 08:03:54.500674  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 08:03:54.521979  498230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 08:03:54.546913  498230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 08:03:54.560719  498230 ssh_runner.go:195] Run: openssl version
	I1002 08:03:54.566921  498230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 08:03:54.580753  498230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 08:03:54.584951  498230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 08:03:54.585069  498230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 08:03:54.629665  498230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 08:03:54.638897  498230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 08:03:54.655295  498230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:03:54.659118  498230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:03:54.659224  498230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:03:54.701199  498230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 08:03:54.709205  498230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 08:03:54.717473  498230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 08:03:54.721274  498230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 08:03:54.721378  498230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 08:03:54.762854  498230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
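
Each extra CA dropped into /usr/share/ca-certificates is wired into OpenSSL's CApath lookup directly instead of relying on update-ca-certificates: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and a `<hash>.0` symlink with that name is created in /etc/ssl/certs. For the minikube CA the two steps line up like this (the hash value is the one visible in the symlink created above):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
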
	I1002 08:03:54.771630  498230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 08:03:54.775631  498230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 08:03:54.817606  498230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 08:03:54.859578  498230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 08:03:54.902472  498230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 08:03:54.951535  498230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 08:03:55.024510  498230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
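
The block of openssl invocations above is the expiry check on the existing control-plane certificates: `-checkend 86400` makes openssl exit non-zero if the certificate would expire within the next 24 hours, so a zero exit for every file lets the restart reuse them instead of regenerating. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least another 24h"
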
	I1002 08:03:55.093941  498230 kubeadm.go:400] StartCluster: {Name:embed-certs-171347 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-171347 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:03:55.094085  498230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 08:03:55.094178  498230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:03:55.174355  498230 cri.go:89] found id: "6f3ca884c1303597bf9de27670995129fac9974f29ec3998eefcb79f460f2608"
	I1002 08:03:55.174427  498230 cri.go:89] found id: "a3295c18de5cd39930de6a29eafc9bfeb208a5f01b6be0d2f865fafae39a8562"
	I1002 08:03:55.174450  498230 cri.go:89] found id: "19e7d5d7bdca5512898a0c121ad4ff851265a3f8cf6c48dddb1e91460e0e5e12"
	I1002 08:03:55.174496  498230 cri.go:89] found id: "85a09c19828ce281864f49326c73b8b58d618d6e28f38bb8d34c435302ffd907"
	I1002 08:03:55.174524  498230 cri.go:89] found id: ""
	I1002 08:03:55.174598  498230 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 08:03:55.196095  498230 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:03:55Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:03:55.196228  498230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 08:03:55.229826  498230 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 08:03:55.229907  498230 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 08:03:55.229982  498230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 08:03:55.262510  498230 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 08:03:55.263147  498230 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-171347" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:03:55.263431  498230 kubeconfig.go:62] /home/jenkins/minikube-integration/21643-292504/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-171347" cluster setting kubeconfig missing "embed-certs-171347" context setting]
	I1002 08:03:55.263966  498230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:03:55.265565  498230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 08:03:55.279166  498230 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 08:03:55.279246  498230 kubeadm.go:601] duration metric: took 49.317772ms to restartPrimaryControlPlane
	I1002 08:03:55.279272  498230 kubeadm.go:402] duration metric: took 185.347432ms to StartCluster
	I1002 08:03:55.279304  498230 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:03:55.279390  498230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:03:55.280662  498230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:03:55.280943  498230 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:03:55.281493  498230 config.go:182] Loaded profile config "embed-certs-171347": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:03:55.281545  498230 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 08:03:55.281741  498230 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-171347"
	I1002 08:03:55.281795  498230 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-171347"
	W1002 08:03:55.281816  498230 addons.go:247] addon storage-provisioner should already be in state true
	I1002 08:03:55.281857  498230 host.go:66] Checking if "embed-certs-171347" exists ...
	I1002 08:03:55.282483  498230 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:03:55.282697  498230 addons.go:69] Setting dashboard=true in profile "embed-certs-171347"
	I1002 08:03:55.282739  498230 addons.go:238] Setting addon dashboard=true in "embed-certs-171347"
	W1002 08:03:55.282765  498230 addons.go:247] addon dashboard should already be in state true
	I1002 08:03:55.282816  498230 host.go:66] Checking if "embed-certs-171347" exists ...
	I1002 08:03:55.283122  498230 addons.go:69] Setting default-storageclass=true in profile "embed-certs-171347"
	I1002 08:03:55.283139  498230 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-171347"
	I1002 08:03:55.283371  498230 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:03:55.283698  498230 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:03:55.286302  498230 out.go:179] * Verifying Kubernetes components...
	I1002 08:03:55.289534  498230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:03:55.330497  498230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 08:03:55.335323  498230 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:03:55.335345  498230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 08:03:55.335418  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:55.340351  498230 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 08:03:55.345068  498230 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 08:03:55.351384  498230 addons.go:238] Setting addon default-storageclass=true in "embed-certs-171347"
	W1002 08:03:55.351418  498230 addons.go:247] addon default-storageclass should already be in state true
	I1002 08:03:55.351449  498230 host.go:66] Checking if "embed-certs-171347" exists ...
	I1002 08:03:55.351913  498230 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:03:55.353065  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 08:03:55.353093  498230 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 08:03:55.353147  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:55.373582  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:03:55.407651  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:03:55.412530  498230 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 08:03:55.412551  498230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 08:03:55.412612  498230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:03:55.451817  498230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:03:55.640493  498230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:03:55.678927  498230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:03:55.762901  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 08:03:55.762923  498230 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 08:03:55.765218  498230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:03:55.807814  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 08:03:55.807890  498230 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 08:03:55.852773  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 08:03:55.852853  498230 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 08:03:55.959800  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 08:03:55.959873  498230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 08:03:56.012484  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 08:03:56.012567  498230 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 08:03:56.052324  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 08:03:56.052400  498230 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 08:03:56.083493  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 08:03:56.083594  498230 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 08:03:56.103149  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 08:03:56.103231  498230 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 08:03:56.131128  498230 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 08:03:56.131204  498230 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 08:03:56.155536  498230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
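
The dashboard addon is applied as a single kubectl apply over the ten manifests staged under /etc/kubernetes/addons. Note that the parallel storage-provisioner apply started just above only completes about seven seconds later (see the end of this excerpt), a sign the apiserver is still settling after the container restart. A quick post-apply check one could run against this profile, assuming the addon's resources land in the kubernetes-dashboard namespace created by dashboard-ns.yaml:

	kubectl --context embed-certs-171347 -n kubernetes-dashboard get deploy,svc
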
	W1002 08:03:54.654211  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	W1002 08:03:56.656714  495337 pod_ready.go:104] pod "coredns-66bc5c9577-74zfp" is not "Ready", error: <nil>
	I1002 08:03:58.654288  495337 pod_ready.go:94] pod "coredns-66bc5c9577-74zfp" is "Ready"
	I1002 08:03:58.654318  495337 pod_ready.go:86] duration metric: took 35.506240279s for pod "coredns-66bc5c9577-74zfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:58.657303  495337 pod_ready.go:83] waiting for pod "etcd-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:58.666220  495337 pod_ready.go:94] pod "etcd-no-preload-604182" is "Ready"
	I1002 08:03:58.666249  495337 pod_ready.go:86] duration metric: took 8.918318ms for pod "etcd-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:58.668675  495337 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:58.672524  495337 pod_ready.go:94] pod "kube-apiserver-no-preload-604182" is "Ready"
	I1002 08:03:58.672547  495337 pod_ready.go:86] duration metric: took 3.851383ms for pod "kube-apiserver-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:58.674982  495337 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:58.852730  495337 pod_ready.go:94] pod "kube-controller-manager-no-preload-604182" is "Ready"
	I1002 08:03:58.852808  495337 pod_ready.go:86] duration metric: took 177.764732ms for pod "kube-controller-manager-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:59.052005  495337 pod_ready.go:83] waiting for pod "kube-proxy-qn6pp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:59.451880  495337 pod_ready.go:94] pod "kube-proxy-qn6pp" is "Ready"
	I1002 08:03:59.451949  495337 pod_ready.go:86] duration metric: took 399.865487ms for pod "kube-proxy-qn6pp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:03:59.652429  495337 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:00.084547  495337 pod_ready.go:94] pod "kube-scheduler-no-preload-604182" is "Ready"
	I1002 08:04:00.084580  495337 pod_ready.go:86] duration metric: took 432.111742ms for pod "kube-scheduler-no-preload-604182" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:00.084606  495337 pod_ready.go:40] duration metric: took 36.941919729s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:04:00.356423  495337 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 08:04:00.379203  495337 out.go:179] * Done! kubectl is now configured to use "no-preload-604182" cluster and "default" namespace by default
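
For context on the readiness wait recorded above (pod_ready.go polling each kube-system pod until it is "Ready" or gone), the following is a minimal, hypothetical client-go sketch of that style of poll. The kubeconfig path, label selector, poll interval, and overall timeout are illustrative assumptions; this is not minikube's pod_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location (~/.kube/config); minikube writes its context there.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // illustrative overall budget
	for time.Now().Before(deadline) {
		// Poll the coredns pods by label, the same label the log lists for kube-dns.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
			fmt.Println("coredns pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for readiness")
}
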
	I1002 08:04:02.810985  498230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.17044965s)
	I1002 08:04:02.811040  498230 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.13208905s)
	I1002 08:04:02.811068  498230 node_ready.go:35] waiting up to 6m0s for node "embed-certs-171347" to be "Ready" ...
	I1002 08:04:02.811385  498230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.046144537s)
	I1002 08:04:02.853548  498230 node_ready.go:49] node "embed-certs-171347" is "Ready"
	I1002 08:04:02.853628  498230 node_ready.go:38] duration metric: took 42.547133ms for node "embed-certs-171347" to be "Ready" ...
	I1002 08:04:02.853755  498230 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:04:02.853871  498230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:04:02.948386  498230 api_server.go:72] duration metric: took 7.667380746s to wait for apiserver process to appear ...
	I1002 08:04:02.948458  498230 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:04:02.948493  498230 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 08:04:02.948710  498230 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.793098685s)
	I1002 08:04:02.952052  498230 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-171347 addons enable metrics-server
	
	I1002 08:04:02.954957  498230 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1002 08:04:02.957881  498230 addons.go:514] duration metric: took 7.676326953s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1002 08:04:02.961604  498230 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 08:04:02.962755  498230 api_server.go:141] control plane version: v1.34.1
	I1002 08:04:02.962773  498230 api_server.go:131] duration metric: took 14.294458ms to wait for apiserver health ...
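
The apiserver health wait above checks https://192.168.85.2:8443/healthz until it answers 200 ok. Below is a minimal sketch of such a probe; only the endpoint is taken from the log, and skipping TLS verification is an illustrative shortcut (a real client would trust the cluster CA instead).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint copied from the log above.
	url := "https://192.168.85.2:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Shortcut for the sketch only; do not skip verification in real tooling.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}

From the host, curl -k against the same URL should print the same ok body, assuming anonymous access to /healthz is left at its Kubernetes default.
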
	I1002 08:04:02.962782  498230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:04:02.971031  498230 system_pods.go:59] 8 kube-system pods found
	I1002 08:04:02.971137  498230 system_pods.go:61] "coredns-66bc5c9577-h88d8" [2f1ec40b-c756-4c21-b68c-293d99715917] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:04:02.971164  498230 system_pods.go:61] "etcd-embed-certs-171347" [926ce91c-0431-4ba1-b44e-fffbf0775a3b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:04:02.971201  498230 system_pods.go:61] "kindnet-q6rpr" [debb56b0-5037-4c8f-83f9-277929580103] Running
	I1002 08:04:02.971231  498230 system_pods.go:61] "kube-apiserver-embed-certs-171347" [e47c2d75-962d-4fcc-b386-ca8894e72519] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:04:02.971258  498230 system_pods.go:61] "kube-controller-manager-embed-certs-171347" [d161f53c-5955-4fee-b51b-766596a6970c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:04:02.971298  498230 system_pods.go:61] "kube-proxy-jzmxf" [0bb71089-73b5-4b6c-92cd-0c4ba1aee456] Running
	I1002 08:04:02.971329  498230 system_pods.go:61] "kube-scheduler-embed-certs-171347" [8fbc6745-47c9-43ca-af46-4746f82e41f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:04:02.971355  498230 system_pods.go:61] "storage-provisioner" [b206ffb9-0004-486d-98ff-d23a63b69555] Running
	I1002 08:04:02.971389  498230 system_pods.go:74] duration metric: took 8.598102ms to wait for pod list to return data ...
	I1002 08:04:02.971416  498230 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:04:02.977526  498230 default_sa.go:45] found service account: "default"
	I1002 08:04:02.977603  498230 default_sa.go:55] duration metric: took 6.168262ms for default service account to be created ...
	I1002 08:04:02.977628  498230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 08:04:02.981608  498230 system_pods.go:86] 8 kube-system pods found
	I1002 08:04:02.981688  498230 system_pods.go:89] "coredns-66bc5c9577-h88d8" [2f1ec40b-c756-4c21-b68c-293d99715917] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:04:02.981713  498230 system_pods.go:89] "etcd-embed-certs-171347" [926ce91c-0431-4ba1-b44e-fffbf0775a3b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:04:02.981752  498230 system_pods.go:89] "kindnet-q6rpr" [debb56b0-5037-4c8f-83f9-277929580103] Running
	I1002 08:04:02.981781  498230 system_pods.go:89] "kube-apiserver-embed-certs-171347" [e47c2d75-962d-4fcc-b386-ca8894e72519] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:04:02.981805  498230 system_pods.go:89] "kube-controller-manager-embed-certs-171347" [d161f53c-5955-4fee-b51b-766596a6970c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:04:02.981842  498230 system_pods.go:89] "kube-proxy-jzmxf" [0bb71089-73b5-4b6c-92cd-0c4ba1aee456] Running
	I1002 08:04:02.981872  498230 system_pods.go:89] "kube-scheduler-embed-certs-171347" [8fbc6745-47c9-43ca-af46-4746f82e41f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:04:02.981897  498230 system_pods.go:89] "storage-provisioner" [b206ffb9-0004-486d-98ff-d23a63b69555] Running
	I1002 08:04:02.981937  498230 system_pods.go:126] duration metric: took 4.28671ms to wait for k8s-apps to be running ...
	I1002 08:04:02.981964  498230 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 08:04:02.982052  498230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:04:02.999501  498230 system_svc.go:56] duration metric: took 17.528046ms WaitForService to wait for kubelet
	I1002 08:04:02.999575  498230 kubeadm.go:586] duration metric: took 7.718574301s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:04:02.999627  498230 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:04:03.009383  498230 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:04:03.009487  498230 node_conditions.go:123] node cpu capacity is 2
	I1002 08:04:03.009519  498230 node_conditions.go:105] duration metric: took 9.87077ms to run NodePressure ...
	I1002 08:04:03.009563  498230 start.go:241] waiting for startup goroutines ...
	I1002 08:04:03.009589  498230 start.go:246] waiting for cluster config update ...
	I1002 08:04:03.009617  498230 start.go:255] writing updated cluster config ...
	I1002 08:04:03.010049  498230 ssh_runner.go:195] Run: rm -f paused
	I1002 08:04:03.014702  498230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:04:03.074165  498230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h88d8" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 08:04:05.129445  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:07.594768  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:10.082113  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:12.085671  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:14.086215  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:16.581535  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.583960275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.62519821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.66051678Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.702469692Z" level=info msg="Created container 3810207ffcd2ec126a0d091f5c46901cf5991af720346d8e2ae59ddae078ecea: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq/dashboard-metrics-scraper" id=ff2b9d97-77a6-4d8c-9e40-2cbeefb45054 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.72008368Z" level=info msg="Starting container: 3810207ffcd2ec126a0d091f5c46901cf5991af720346d8e2ae59ddae078ecea" id=3733b45f-4e4c-48f2-b11e-056c4fa0fdb2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.734948563Z" level=info msg="Started container" PID=1633 containerID=3810207ffcd2ec126a0d091f5c46901cf5991af720346d8e2ae59ddae078ecea description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq/dashboard-metrics-scraper id=3733b45f-4e4c-48f2-b11e-056c4fa0fdb2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9258d27863ab661daf971138271a561550cd298af24cac74d296aecbe1931594
	Oct 02 08:04:00 no-preload-604182 conmon[1631]: conmon 3810207ffcd2ec126a0d <ninfo>: container 1633 exited with status 1
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.828981767Z" level=info msg="Removing container: 2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957" id=b5bb2773-a9ab-4978-ba5a-08317395ebad name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.843420013Z" level=info msg="Error loading conmon cgroup of container 2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957: cgroup deleted" id=b5bb2773-a9ab-4978-ba5a-08317395ebad name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:04:00 no-preload-604182 crio[651]: time="2025-10-02T08:04:00.848130776Z" level=info msg="Removed container 2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq/dashboard-metrics-scraper" id=b5bb2773-a9ab-4978-ba5a-08317395ebad name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.505911663Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.510733074Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.510764631Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.510785095Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.515521713Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.515675848Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.515760887Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.519848604Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.520011814Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.520081763Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.529326074Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.529487528Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.529576185Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.535904744Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:02 no-preload-604182 crio[651]: time="2025-10-02T08:04:02.536098879Z" level=info msg="Updated default CNI network name to kindnet"
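
The CRI-O entries above show its CNI monitor reacting to CREATE/WRITE/RENAME events under /etc/cni/net.d and re-reading 10-kindnet.conflist to pick the default network name. The sketch below only inspects such a conflist and reports what CRI-O would log; it assumes the standard conflist JSON layout (a name plus a plugins array) and is not CRI-O's watcher code.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// conflist models only the fields needed to report the network name and the
// first plugin type, mirroring the "Found CNI network kindnet (type=ptp)" lines.
type conflist struct {
	Name    string `json:"name"`
	Plugins []struct {
		Type string `json:"type"`
	} `json:"plugins"`
}

func main() {
	dir := "/etc/cni/net.d" // directory CRI-O is watching in the log above
	matches, err := filepath.Glob(filepath.Join(dir, "*.conflist"))
	if err != nil || len(matches) == 0 {
		fmt.Println("no CNI conflist found in", dir)
		return
	}
	data, err := os.ReadFile(matches[0])
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	var c conflist
	if err := json.Unmarshal(data, &c); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	typ := ""
	if len(c.Plugins) > 0 {
		typ = c.Plugins[0].Type
	}
	fmt.Printf("Found CNI network %s (type=%s) at %s\n", c.Name, typ, matches[0])
}
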
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	3810207ffcd2e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   2                   9258d27863ab6       dashboard-metrics-scraper-6ffb444bf9-z9xnq   kubernetes-dashboard
	cf91d8320e31d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           25 seconds ago       Running             storage-provisioner         2                   02c3f20713407       storage-provisioner                          kube-system
	7085a5c11d906       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   891e06acb5da5       kubernetes-dashboard-855c9754f9-dmlvr        kubernetes-dashboard
	c8704e5c1cedb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   53c917b65991e       coredns-66bc5c9577-74zfp                     kube-system
	5891601f179a9       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   ecdfd6640e7a4       busybox                                      default
	941fe1e375ab8       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           56 seconds ago       Exited              storage-provisioner         1                   02c3f20713407       storage-provisioner                          kube-system
	149d1489fe735       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           56 seconds ago       Running             kube-proxy                  1                   a7afc74a47b50       kube-proxy-qn6pp                             kube-system
	345b977a1a5ae       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   f34c6133c7796       kindnet-5zjv7                                kube-system
	4164431db5f86       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   670f209653373       etcd-no-preload-604182                       kube-system
	3e1fc7a1946e3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   92398963260b1       kube-apiserver-no-preload-604182             kube-system
	77029f6aa5b62       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   7e33fca238475       kube-controller-manager-no-preload-604182    kube-system
	3094807a90d6d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   d50d3665a574e       kube-scheduler-no-preload-604182             kube-system
	
	
	==> coredns [c8704e5c1cedb8c825267c9042fc932867f91cfb0f2a0998dac40e8955311969] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58507 - 45460 "HINFO IN 6588191415132121994.7007080049064169055. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005424731s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
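
The coredns i/o timeouts above are failed list calls against the kubernetes Service ClusterIP 10.96.0.1:443 while the control plane was still coming back. Below is a minimal reachability check for that address, intended to be run from inside a pod; only the address is taken from the log, everything else is an illustrative assumption.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// In-cluster kubernetes Service address coredns was dialing in the log above.
	addr := "10.96.0.1:443"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("dial failed (same symptom as the i/o timeouts above):", err)
		return
	}
	conn.Close()
	fmt.Println("TCP connect to", addr, "succeeded")
}
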
	
	
	==> describe nodes <==
	Name:               no-preload-604182
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-604182
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=no-preload-604182
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T08_02_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 08:02:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-604182
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:04:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:04:12 +0000   Thu, 02 Oct 2025 08:02:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:04:12 +0000   Thu, 02 Oct 2025 08:02:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:04:12 +0000   Thu, 02 Oct 2025 08:02:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 08:04:12 +0000   Thu, 02 Oct 2025 08:02:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-604182
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 701c78548bff440eb2e4480981a54c06
	  System UUID:                65f354cd-b030-437d-9beb-12ea491c6172
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-74zfp                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-no-preload-604182                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-5zjv7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-604182              250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-604182     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-qn6pp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-604182              100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-z9xnq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dmlvr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 111s                   kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node no-preload-604182 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node no-preload-604182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node no-preload-604182 status is now: NodeHasSufficientPID
	  Normal   Starting                 118s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 118s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    117s                   kubelet          Node no-preload-604182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s                   kubelet          Node no-preload-604182 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  117s                   kubelet          Node no-preload-604182 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           113s                   node-controller  Node no-preload-604182 event: Registered Node no-preload-604182 in Controller
	  Normal   NodeReady                96s                    kubelet          Node no-preload-604182 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node no-preload-604182 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node no-preload-604182 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node no-preload-604182 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node no-preload-604182 event: Registered Node no-preload-604182 in Controller
	
	
	==> dmesg <==
	[Oct 2 07:33] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:00] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:03] overlayfs: idmapped layers are currently not supported
	[ +38.953360] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4164431db5f8614c900dab52a55fbc230192e5350939fe5d0d56bfc4b9f37616] <==
	{"level":"warn","ts":"2025-10-02T08:03:19.050035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.073550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.107736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.123663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.144502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.169834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.179612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.200281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.221741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.260395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.281619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.291940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.308143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.337391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.355933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.396999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.439520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.478424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.496259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.532138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.542153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.588065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.615869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.639537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:19.789256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53496","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:04:18 up  2:46,  0 user,  load average: 5.12, 3.29, 2.29
	Linux no-preload-604182 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [345b977a1a5ae88c18319ef442b740dfcd5b6f2cff29fd84e21439458e7a131c] <==
	I1002 08:03:22.237961       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 08:03:22.299312       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 08:03:22.299563       1 main.go:148] setting mtu 1500 for CNI 
	I1002 08:03:22.299615       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 08:03:22.299657       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T08:03:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 08:03:22.505046       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 08:03:22.505075       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 08:03:22.505094       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 08:03:22.510996       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 08:03:52.506015       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 08:03:52.506156       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 08:03:52.511645       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 08:03:52.511758       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1002 08:03:53.805860       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 08:03:53.805896       1 metrics.go:72] Registering metrics
	I1002 08:03:53.805961       1 controller.go:711] "Syncing nftables rules"
	I1002 08:04:02.505646       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:04:02.505698       1 main.go:301] handling current node
	I1002 08:04:12.508052       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:04:12.508104       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3e1fc7a1946e3a39d39fe7e56e659a01f9a77a1b064829ae68f8e7533e1798bc] <==
	I1002 08:03:21.170266       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 08:03:21.170275       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 08:03:21.203418       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 08:03:21.223621       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 08:03:21.224184       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 08:03:21.224365       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 08:03:21.224373       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 08:03:21.224471       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 08:03:21.260545       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:03:21.270848       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 08:03:21.281492       1 cache.go:39] Caches are synced for autoregister controller
	E1002 08:03:21.303978       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 08:03:21.351134       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 08:03:21.351251       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 08:03:21.573030       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 08:03:21.630178       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:03:22.170937       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 08:03:22.308695       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 08:03:22.398670       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:03:22.442814       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:03:22.564420       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.53.77"}
	I1002 08:03:22.616775       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.133.8"}
	I1002 08:03:25.648125       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 08:03:25.776558       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 08:03:25.803696       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [77029f6aa5b6233463612c47bb436aebdb6578cbd16ee091398e61c2c07d4608] <==
	I1002 08:03:25.219200       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 08:03:25.219242       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 08:03:25.219378       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 08:03:25.223146       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 08:03:25.224362       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 08:03:25.226669       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:03:25.226745       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 08:03:25.231501       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 08:03:25.232586       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 08:03:25.233835       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 08:03:25.235110       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 08:03:25.240945       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 08:03:25.241143       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 08:03:25.241776       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 08:03:25.242749       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 08:03:25.243047       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 08:03:25.243719       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 08:03:25.247869       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 08:03:25.248015       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-604182"
	I1002 08:03:25.248069       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 08:03:25.248121       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 08:03:25.259339       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 08:03:25.266283       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 08:03:25.268689       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 08:03:25.279962       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [149d1489fe735da77b26aff3ec794c7c79ef2de589160921415fb965adcead0f] <==
	I1002 08:03:22.625279       1 server_linux.go:53] "Using iptables proxy"
	I1002 08:03:23.244671       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 08:03:23.353944       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 08:03:23.355093       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 08:03:23.355172       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 08:03:23.402345       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 08:03:23.402410       1 server_linux.go:132] "Using iptables Proxier"
	I1002 08:03:23.407345       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 08:03:23.407712       1 server.go:527] "Version info" version="v1.34.1"
	I1002 08:03:23.407895       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:03:23.409117       1 config.go:200] "Starting service config controller"
	I1002 08:03:23.409182       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 08:03:23.409238       1 config.go:106] "Starting endpoint slice config controller"
	I1002 08:03:23.409268       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 08:03:23.409303       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 08:03:23.409328       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 08:03:23.410056       1 config.go:309] "Starting node config controller"
	I1002 08:03:23.410130       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 08:03:23.410162       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 08:03:23.515365       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 08:03:23.515410       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 08:03:23.515454       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3094807a90d6dcd41655425e2f8000995d5181c4b8e85810c853b4db03b96dc4] <==
	I1002 08:03:21.745637       1 serving.go:386] Generated self-signed cert in-memory
	I1002 08:03:24.870945       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 08:03:24.870985       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:03:24.879372       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 08:03:24.879484       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 08:03:24.879555       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:03:24.879588       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:03:24.879638       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:03:24.879709       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:03:24.879825       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 08:03:24.879950       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 08:03:24.979824       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 08:03:24.979824       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:03:24.979858       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 08:03:25 no-preload-604182 kubelet[769]: I1002 08:03:25.909996     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzk4r\" (UniqueName: \"kubernetes.io/projected/51c88494-20e6-4c12-ba1b-f8f2acc204ee-kube-api-access-fzk4r\") pod \"dashboard-metrics-scraper-6ffb444bf9-z9xnq\" (UID: \"51c88494-20e6-4c12-ba1b-f8f2acc204ee\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq"
	Oct 02 08:03:25 no-preload-604182 kubelet[769]: I1002 08:03:25.910022     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7a97e796-8ad8-47b3-8086-2f9a8da34762-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-dmlvr\" (UID: \"7a97e796-8ad8-47b3-8086-2f9a8da34762\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dmlvr"
	Oct 02 08:03:25 no-preload-604182 kubelet[769]: I1002 08:03:25.910042     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/51c88494-20e6-4c12-ba1b-f8f2acc204ee-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-z9xnq\" (UID: \"51c88494-20e6-4c12-ba1b-f8f2acc204ee\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq"
	Oct 02 08:03:26 no-preload-604182 kubelet[769]: W1002 08:03:26.101899     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/crio-9258d27863ab661daf971138271a561550cd298af24cac74d296aecbe1931594 WatchSource:0}: Error finding container 9258d27863ab661daf971138271a561550cd298af24cac74d296aecbe1931594: Status 404 returned error can't find the container with id 9258d27863ab661daf971138271a561550cd298af24cac74d296aecbe1931594
	Oct 02 08:03:26 no-preload-604182 kubelet[769]: W1002 08:03:26.102345     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/eb7634b68495aa72a22211895a0c66a540f3ef0c6a54103922964cdb35e597bd/crio-891e06acb5da58b5ca866aaede14f35fd7f71916a1cdb3d93394e45a27df9845 WatchSource:0}: Error finding container 891e06acb5da58b5ca866aaede14f35fd7f71916a1cdb3d93394e45a27df9845: Status 404 returned error can't find the container with id 891e06acb5da58b5ca866aaede14f35fd7f71916a1cdb3d93394e45a27df9845
	Oct 02 08:03:28 no-preload-604182 kubelet[769]: I1002 08:03:28.181961     769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 08:03:37 no-preload-604182 kubelet[769]: I1002 08:03:37.769175     769 scope.go:117] "RemoveContainer" containerID="badde932c30f4c93c898313fb350420a0312ad10e6a9ab4acc18ce74368761ff"
	Oct 02 08:03:37 no-preload-604182 kubelet[769]: I1002 08:03:37.788761     769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dmlvr" podStartSLOduration=7.824294946 podStartE2EDuration="12.788742572s" podCreationTimestamp="2025-10-02 08:03:25 +0000 UTC" firstStartedPulling="2025-10-02 08:03:26.106708946 +0000 UTC m=+10.754949390" lastFinishedPulling="2025-10-02 08:03:31.071156563 +0000 UTC m=+15.719397016" observedRunningTime="2025-10-02 08:03:31.817011262 +0000 UTC m=+16.465251789" watchObservedRunningTime="2025-10-02 08:03:37.788742572 +0000 UTC m=+22.436983016"
	Oct 02 08:03:38 no-preload-604182 kubelet[769]: I1002 08:03:38.774470     769 scope.go:117] "RemoveContainer" containerID="badde932c30f4c93c898313fb350420a0312ad10e6a9ab4acc18ce74368761ff"
	Oct 02 08:03:38 no-preload-604182 kubelet[769]: I1002 08:03:38.775605     769 scope.go:117] "RemoveContainer" containerID="2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957"
	Oct 02 08:03:38 no-preload-604182 kubelet[769]: E1002 08:03:38.775933     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z9xnq_kubernetes-dashboard(51c88494-20e6-4c12-ba1b-f8f2acc204ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq" podUID="51c88494-20e6-4c12-ba1b-f8f2acc204ee"
	Oct 02 08:03:39 no-preload-604182 kubelet[769]: I1002 08:03:39.778618     769 scope.go:117] "RemoveContainer" containerID="2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957"
	Oct 02 08:03:39 no-preload-604182 kubelet[769]: E1002 08:03:39.778777     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z9xnq_kubernetes-dashboard(51c88494-20e6-4c12-ba1b-f8f2acc204ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq" podUID="51c88494-20e6-4c12-ba1b-f8f2acc204ee"
	Oct 02 08:03:46 no-preload-604182 kubelet[769]: I1002 08:03:46.067584     769 scope.go:117] "RemoveContainer" containerID="2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957"
	Oct 02 08:03:46 no-preload-604182 kubelet[769]: E1002 08:03:46.067828     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z9xnq_kubernetes-dashboard(51c88494-20e6-4c12-ba1b-f8f2acc204ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq" podUID="51c88494-20e6-4c12-ba1b-f8f2acc204ee"
	Oct 02 08:03:52 no-preload-604182 kubelet[769]: I1002 08:03:52.807715     769 scope.go:117] "RemoveContainer" containerID="941fe1e375ab8b5c7819755f3d2feb5bcdaf2abeb7390f95036f174f13325d9f"
	Oct 02 08:04:00 no-preload-604182 kubelet[769]: I1002 08:04:00.549102     769 scope.go:117] "RemoveContainer" containerID="2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957"
	Oct 02 08:04:00 no-preload-604182 kubelet[769]: I1002 08:04:00.827832     769 scope.go:117] "RemoveContainer" containerID="2a110470de45a0e33b69cbca1949f3f4b22f916da7e5362d43a8dcb892643957"
	Oct 02 08:04:01 no-preload-604182 kubelet[769]: I1002 08:04:01.831629     769 scope.go:117] "RemoveContainer" containerID="3810207ffcd2ec126a0d091f5c46901cf5991af720346d8e2ae59ddae078ecea"
	Oct 02 08:04:01 no-preload-604182 kubelet[769]: E1002 08:04:01.831810     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z9xnq_kubernetes-dashboard(51c88494-20e6-4c12-ba1b-f8f2acc204ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq" podUID="51c88494-20e6-4c12-ba1b-f8f2acc204ee"
	Oct 02 08:04:06 no-preload-604182 kubelet[769]: I1002 08:04:06.067450     769 scope.go:117] "RemoveContainer" containerID="3810207ffcd2ec126a0d091f5c46901cf5991af720346d8e2ae59ddae078ecea"
	Oct 02 08:04:06 no-preload-604182 kubelet[769]: E1002 08:04:06.067679     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z9xnq_kubernetes-dashboard(51c88494-20e6-4c12-ba1b-f8f2acc204ee)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z9xnq" podUID="51c88494-20e6-4c12-ba1b-f8f2acc204ee"
	Oct 02 08:04:13 no-preload-604182 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 08:04:13 no-preload-604182 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 08:04:13 no-preload-604182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [7085a5c11d9068aa1530ac0f9c639bae1b8214bbc8ae69419c1885816bfc2422] <==
	2025/10/02 08:03:31 Starting overwatch
	2025/10/02 08:03:31 Using namespace: kubernetes-dashboard
	2025/10/02 08:03:31 Using in-cluster config to connect to apiserver
	2025/10/02 08:03:31 Using secret token for csrf signing
	2025/10/02 08:03:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 08:03:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 08:03:31 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 08:03:31 Generating JWE encryption key
	2025/10/02 08:03:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 08:03:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 08:03:33 Initializing JWE encryption key from synchronized object
	2025/10/02 08:03:33 Creating in-cluster Sidecar client
	2025/10/02 08:03:33 Serving insecurely on HTTP port: 9090
	2025/10/02 08:03:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 08:04:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [941fe1e375ab8b5c7819755f3d2feb5bcdaf2abeb7390f95036f174f13325d9f] <==
	I1002 08:03:22.267922       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 08:03:52.269497       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cf91d8320e31d1cdb4432930b9af5cdeb2b936e99b90b8a85fab2f65fd803d34] <==
	I1002 08:03:52.894268       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 08:03:52.907257       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 08:03:52.907365       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 08:03:52.909637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:03:56.364561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:00.640682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:04.238886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:07.293609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:10.315594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:10.320785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:04:10.320939       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 08:04:10.321121       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-604182_2645f5b0-8a9a-4e1b-b4b3-cbb2009532e8!
	I1002 08:04:10.321852       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce44b2f7-3b72-4264-8678-b29a955c98d3", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-604182_2645f5b0-8a9a-4e1b-b4b3-cbb2009532e8 became leader
	W1002 08:04:10.326303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:10.333017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:04:10.422131       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-604182_2645f5b0-8a9a-4e1b-b4b3-cbb2009532e8!
	W1002 08:04:12.343421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:12.358559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:14.362487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:14.367348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:16.371032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:16.383670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:18.388935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:18.401193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-604182 -n no-preload-604182
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-604182 -n no-preload-604182: exit status 2 (381.672304ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-604182 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.31s)
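Note on the storage-provisioner output above: it takes its kube-system/k8s.io-minikube-hostpath lock through a v1 Endpoints object, which is why every poll of the lock emits the "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warning. As a rough illustration of what that warning points toward (this is not the provisioner's actual code; the identity string and timings below are assumptions), a Lease-backed leader-election lock with client-go looks like this:

	// Sketch only: a coordination.k8s.io Lease lock instead of the
	// Endpoints-based lock that triggers the warnings in the log above.
	package main

	import (
		"context"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lock name and namespace match the lease seen in the log; the
		// holder identity here is an illustrative placeholder.
		lock, err := resourcelock.New(
			resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: "example-holder"},
		)
		if err != nil {
			panic(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
				OnStoppedLeading: func() { /* stop work once leadership is lost */ },
			},
		})
	}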

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-171347 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-171347 --alsologtostderr -v=1: exit status 80 (1.936643674s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-171347 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 08:04:57.594673  504082 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:04:57.594858  504082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:04:57.594873  504082 out.go:374] Setting ErrFile to fd 2...
	I1002 08:04:57.594877  504082 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:04:57.595160  504082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:04:57.595406  504082 out.go:368] Setting JSON to false
	I1002 08:04:57.595421  504082 mustload.go:65] Loading cluster: embed-certs-171347
	I1002 08:04:57.595825  504082 config.go:182] Loaded profile config "embed-certs-171347": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:04:57.596271  504082 cli_runner.go:164] Run: docker container inspect embed-certs-171347 --format={{.State.Status}}
	I1002 08:04:57.620678  504082 host.go:66] Checking if "embed-certs-171347" exists ...
	I1002 08:04:57.621000  504082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:04:57.690181  504082 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 08:04:57.680636531 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:04:57.690851  504082 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-171347 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 08:04:57.694153  504082 out.go:179] * Pausing node embed-certs-171347 ... 
	I1002 08:04:57.698052  504082 host.go:66] Checking if "embed-certs-171347" exists ...
	I1002 08:04:57.698421  504082 ssh_runner.go:195] Run: systemctl --version
	I1002 08:04:57.698473  504082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-171347
	I1002 08:04:57.718392  504082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/embed-certs-171347/id_rsa Username:docker}
	I1002 08:04:57.818359  504082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:04:57.832354  504082 pause.go:51] kubelet running: true
	I1002 08:04:57.832428  504082 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:04:58.154486  504082 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:04:58.154574  504082 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:04:58.230949  504082 cri.go:89] found id: "db0b849d228fac61d4e2d6e7be757539fbc17766bd3b2e4bc18d8917aebdbf68"
	I1002 08:04:58.230970  504082 cri.go:89] found id: "a0a6e0d90ca3f8c60830a35bbff243ab0113bb82f6b41ee268f2abb9cf210599"
	I1002 08:04:58.230975  504082 cri.go:89] found id: "600c1f2a64fc29e5faea39640a8f0c02a79132bb36624fecd3cf771143b4199e"
	I1002 08:04:58.230984  504082 cri.go:89] found id: "fe1a50b0490de1a057bb2439be07b143442b3be835d66e0e05add86a488991b7"
	I1002 08:04:58.230987  504082 cri.go:89] found id: "cef0953b0e3f7851b931442373cc005c869dafa2e5b3791570a189edfeed70be"
	I1002 08:04:58.230995  504082 cri.go:89] found id: "6f3ca884c1303597bf9de27670995129fac9974f29ec3998eefcb79f460f2608"
	I1002 08:04:58.230999  504082 cri.go:89] found id: "a3295c18de5cd39930de6a29eafc9bfeb208a5f01b6be0d2f865fafae39a8562"
	I1002 08:04:58.231002  504082 cri.go:89] found id: "19e7d5d7bdca5512898a0c121ad4ff851265a3f8cf6c48dddb1e91460e0e5e12"
	I1002 08:04:58.231006  504082 cri.go:89] found id: "85a09c19828ce281864f49326c73b8b58d618d6e28f38bb8d34c435302ffd907"
	I1002 08:04:58.231037  504082 cri.go:89] found id: "036a049d4d170fc6f94d917d0b70ffeec2e78e29355403e588a6a8c388ef33f1"
	I1002 08:04:58.231045  504082 cri.go:89] found id: "85a6c0112fabfd159eed64bcdbc0d532333c469b91bb51b8b81cceaa57497dfa"
	I1002 08:04:58.231049  504082 cri.go:89] found id: ""
	I1002 08:04:58.231151  504082 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:04:58.243682  504082 retry.go:31] will retry after 282.68842ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:04:58Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:04:58.526960  504082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:04:58.543560  504082 pause.go:51] kubelet running: false
	I1002 08:04:58.543709  504082 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:04:58.747746  504082 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:04:58.747913  504082 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:04:58.824545  504082 cri.go:89] found id: "db0b849d228fac61d4e2d6e7be757539fbc17766bd3b2e4bc18d8917aebdbf68"
	I1002 08:04:58.824611  504082 cri.go:89] found id: "a0a6e0d90ca3f8c60830a35bbff243ab0113bb82f6b41ee268f2abb9cf210599"
	I1002 08:04:58.824644  504082 cri.go:89] found id: "600c1f2a64fc29e5faea39640a8f0c02a79132bb36624fecd3cf771143b4199e"
	I1002 08:04:58.824667  504082 cri.go:89] found id: "fe1a50b0490de1a057bb2439be07b143442b3be835d66e0e05add86a488991b7"
	I1002 08:04:58.824687  504082 cri.go:89] found id: "cef0953b0e3f7851b931442373cc005c869dafa2e5b3791570a189edfeed70be"
	I1002 08:04:58.824722  504082 cri.go:89] found id: "6f3ca884c1303597bf9de27670995129fac9974f29ec3998eefcb79f460f2608"
	I1002 08:04:58.824747  504082 cri.go:89] found id: "a3295c18de5cd39930de6a29eafc9bfeb208a5f01b6be0d2f865fafae39a8562"
	I1002 08:04:58.824769  504082 cri.go:89] found id: "19e7d5d7bdca5512898a0c121ad4ff851265a3f8cf6c48dddb1e91460e0e5e12"
	I1002 08:04:58.824805  504082 cri.go:89] found id: "85a09c19828ce281864f49326c73b8b58d618d6e28f38bb8d34c435302ffd907"
	I1002 08:04:58.824831  504082 cri.go:89] found id: "036a049d4d170fc6f94d917d0b70ffeec2e78e29355403e588a6a8c388ef33f1"
	I1002 08:04:58.824854  504082 cri.go:89] found id: "85a6c0112fabfd159eed64bcdbc0d532333c469b91bb51b8b81cceaa57497dfa"
	I1002 08:04:58.824888  504082 cri.go:89] found id: ""
	I1002 08:04:58.824984  504082 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:04:58.836046  504082 retry.go:31] will retry after 274.786431ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:04:58Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:04:59.111651  504082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:04:59.130437  504082 pause.go:51] kubelet running: false
	I1002 08:04:59.130514  504082 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:04:59.355573  504082 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:04:59.355699  504082 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:04:59.432728  504082 cri.go:89] found id: "db0b849d228fac61d4e2d6e7be757539fbc17766bd3b2e4bc18d8917aebdbf68"
	I1002 08:04:59.432750  504082 cri.go:89] found id: "a0a6e0d90ca3f8c60830a35bbff243ab0113bb82f6b41ee268f2abb9cf210599"
	I1002 08:04:59.432755  504082 cri.go:89] found id: "600c1f2a64fc29e5faea39640a8f0c02a79132bb36624fecd3cf771143b4199e"
	I1002 08:04:59.432758  504082 cri.go:89] found id: "fe1a50b0490de1a057bb2439be07b143442b3be835d66e0e05add86a488991b7"
	I1002 08:04:59.432762  504082 cri.go:89] found id: "cef0953b0e3f7851b931442373cc005c869dafa2e5b3791570a189edfeed70be"
	I1002 08:04:59.432766  504082 cri.go:89] found id: "6f3ca884c1303597bf9de27670995129fac9974f29ec3998eefcb79f460f2608"
	I1002 08:04:59.432769  504082 cri.go:89] found id: "a3295c18de5cd39930de6a29eafc9bfeb208a5f01b6be0d2f865fafae39a8562"
	I1002 08:04:59.432772  504082 cri.go:89] found id: "19e7d5d7bdca5512898a0c121ad4ff851265a3f8cf6c48dddb1e91460e0e5e12"
	I1002 08:04:59.432775  504082 cri.go:89] found id: "85a09c19828ce281864f49326c73b8b58d618d6e28f38bb8d34c435302ffd907"
	I1002 08:04:59.432782  504082 cri.go:89] found id: "036a049d4d170fc6f94d917d0b70ffeec2e78e29355403e588a6a8c388ef33f1"
	I1002 08:04:59.432785  504082 cri.go:89] found id: "85a6c0112fabfd159eed64bcdbc0d532333c469b91bb51b8b81cceaa57497dfa"
	I1002 08:04:59.432788  504082 cri.go:89] found id: ""
	I1002 08:04:59.432843  504082 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:04:59.447375  504082 out.go:203] 
	W1002 08:04:59.450240  504082 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:04:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:04:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 08:04:59.450265  504082 out.go:285] * 
	* 
	W1002 08:04:59.455939  504082 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 08:04:59.459038  504082 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-171347 --alsologtostderr -v=1 failed: exit status 80
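The exit status 80 above comes from the container-listing step of the pause: kubelet is stopped and crictl still reports the expected container IDs, but every "sudo runc list -f json" attempt fails with "open /run/runc: no such file or directory", so the pause gives up after its retries. A minimal sketch of that probe follows, with one assumed way of tolerating a missing runc state directory; the fallback policy is hypothetical and not minikube's actual behaviour.

	// Sketch only: reproduce the failing probe from the trace above and show
	// one hypothetical way to treat a missing /run/runc as "nothing running".
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func listRuncContainers() ([]byte, error) {
		// Same command the pause path runs over SSH in the log above.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// "open /run/runc: no such file or directory" means the runc state
			// directory is absent; interpreting that as an empty container list
			// (rather than a hard failure) is an assumption made for this sketch.
			if _, statErr := os.Stat("/run/runc"); os.IsNotExist(statErr) {
				return []byte("[]"), nil
			}
			return nil, fmt.Errorf("runc list: %w: %s", err, out)
		}
		return out, nil
	}

	func main() {
		out, err := listRuncContainers()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("running containers: %s\n", out)
	}

Whether an empty state directory should really be treated as "no containers" depends on why /run/runc is missing in the first place; the sketch only mirrors the shape of the check that fails here.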
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-171347
helpers_test.go:243: (dbg) docker inspect embed-certs-171347:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa",
	        "Created": "2025-10-02T08:02:00.578455149Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 498358,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T08:03:47.492578568Z",
	            "FinishedAt": "2025-10-02T08:03:46.439404398Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/hostname",
	        "HostsPath": "/var/lib/docker/containers/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/hosts",
	        "LogPath": "/var/lib/docker/containers/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa-json.log",
	        "Name": "/embed-certs-171347",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-171347:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-171347",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa",
	                "LowerDir": "/var/lib/docker/overlay2/c92ba62aeaf74f1e329cdefec79ac5294c1ee446a93853845f2f03c39bb325b3-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c92ba62aeaf74f1e329cdefec79ac5294c1ee446a93853845f2f03c39bb325b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c92ba62aeaf74f1e329cdefec79ac5294c1ee446a93853845f2f03c39bb325b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c92ba62aeaf74f1e329cdefec79ac5294c1ee446a93853845f2f03c39bb325b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-171347",
	                "Source": "/var/lib/docker/volumes/embed-certs-171347/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-171347",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-171347",
	                "name.minikube.sigs.k8s.io": "embed-certs-171347",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7556ba907aaf720a1498b85d8d8dee950c078f679696fd827c10f9855bda88b8",
	            "SandboxKey": "/var/run/docker/netns/7556ba907aaf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-171347": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:13:cc:c8:ec:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02e39ca8e594ec82c902deecf74b9a14d44881e9835232c2f729a3d1bc104bcc",
	                    "EndpointID": "5b6c6499a7c6289b6110e44f904fbd4bd35f5b09a64b3c061e7e83456b9d18bb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-171347",
	                        "411784c5c3f5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
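The inspect output above is also where the SSH endpoint for the post-mortem comes from: 22/tcp is published on 127.0.0.1:33423, the same value the earlier pause attempt resolved with its {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} template. A small sketch of reading that mapping from the inspect JSON (container name taken from the output above; error handling kept minimal):

	// Sketch only: extract the 22/tcp host port from `docker inspect` JSON
	// like the block above (33423 in this run).
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "embed-certs-171347").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		if len(containers) == 0 {
			log.Fatal("no such container")
		}
		ssh := containers[0].NetworkSettings.Ports["22/tcp"][0]
		fmt.Printf("ssh endpoint: %s:%s\n", ssh.HostIp, ssh.HostPort) // 127.0.0.1:33423
	}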
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-171347 -n embed-certs-171347
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-171347 -n embed-certs-171347: exit status 2 (435.236058ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-171347 logs -n 25
E1002 08:05:01.410018  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-171347 logs -n 25: (2.303041051s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p old-k8s-version-356986 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-356986       │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:00 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986       │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:01 UTC │
	│ image   │ old-k8s-version-356986 image list --format=json                                                                                                                                                                                               │ old-k8s-version-356986       │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ pause   │ -p old-k8s-version-356986 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-356986       │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │                     │
	│ delete  │ -p old-k8s-version-356986                                                                                                                                                                                                                     │ old-k8s-version-356986       │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ delete  │ -p old-k8s-version-356986                                                                                                                                                                                                                     │ old-k8s-version-356986       │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:02 UTC │
	│ delete  │ -p cert-expiration-759246                                                                                                                                                                                                                     │ cert-expiration-759246       │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-604182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │                     │
	│ stop    │ -p no-preload-604182 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p no-preload-604182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-171347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │                     │
	│ stop    │ -p embed-certs-171347 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-171347 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:04 UTC │
	│ image   │ no-preload-604182 image list --format=json                                                                                                                                                                                                    │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p no-preload-604182 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p disable-driver-mounts-466206                                                                                                                                                                                                               │ disable-driver-mounts-466206 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ start   │ -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ image   │ embed-certs-171347 image list --format=json                                                                                                                                                                                                   │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p embed-certs-171347 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:04:22
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:04:22.860282  501823 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:04:22.860492  501823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:04:22.860522  501823 out.go:374] Setting ErrFile to fd 2...
	I1002 08:04:22.860542  501823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:04:22.860958  501823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:04:22.861971  501823 out.go:368] Setting JSON to false
	I1002 08:04:22.863000  501823 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10014,"bootTime":1759382249,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 08:04:22.863072  501823 start.go:140] virtualization:  
	I1002 08:04:22.866925  501823 out.go:179] * [default-k8s-diff-port-417078] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:04:22.870790  501823 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:04:22.870964  501823 notify.go:220] Checking for updates...
	I1002 08:04:22.876733  501823 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:04:22.879695  501823 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:04:22.882675  501823 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 08:04:22.885522  501823 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:04:22.888512  501823 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:04:22.892077  501823 config.go:182] Loaded profile config "embed-certs-171347": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:04:22.892210  501823 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:04:22.927308  501823 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:04:22.927477  501823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:04:22.986024  501823 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:04:22.976277267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:04:22.986140  501823 docker.go:318] overlay module found
	I1002 08:04:22.989212  501823 out.go:179] * Using the docker driver based on user configuration
	I1002 08:04:22.992149  501823 start.go:304] selected driver: docker
	I1002 08:04:22.992173  501823 start.go:924] validating driver "docker" against <nil>
	I1002 08:04:22.992187  501823 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:04:22.992923  501823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:04:23.053338  501823 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:04:23.043636296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:04:23.053493  501823 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 08:04:23.053734  501823 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:04:23.056748  501823 out.go:179] * Using Docker driver with root privileges
	I1002 08:04:23.059792  501823 cni.go:84] Creating CNI manager for ""
	I1002 08:04:23.059892  501823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:04:23.059908  501823 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 08:04:23.060008  501823 start.go:348] cluster config:
	{Name:default-k8s-diff-port-417078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
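
The struct dump above is the full cluster profile that the later "Saving config" lines persist to .minikube/profiles/default-k8s-diff-port-417078/config.json. As a minimal, hedged sketch of that persistence step, the Go program below serializes a pared-down profile; only a handful of the fields visible in the dump are included, and the real minikube config type has many more:

    package main

    import (
    	"encoding/json"
    	"os"
    	"path/filepath"
    )

    // Pared-down view of the profile shown in the log above; field names are
    // taken from the dump, but this is not the real minikube ClusterConfig type.
    type KubernetesConfig struct {
    	KubernetesVersion string
    	ClusterName       string
    	ContainerRuntime  string
    	NetworkPlugin     string
    	ServiceCIDR       string
    }

    type ClusterConfig struct {
    	Name             string
    	Driver           string
    	Memory           int
    	CPUs             int
    	APIServerPort    int
    	KubernetesConfig KubernetesConfig
    }

    func saveProfile(miniHome string, cfg ClusterConfig) error {
    	dir := filepath.Join(miniHome, "profiles", cfg.Name)
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		return err
    	}
    	data, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		return err
    	}
    	// The real code takes a file lock first (see the lock.go WriteFile lines below).
    	return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
    }

    func main() {
    	cfg := ClusterConfig{
    		Name:          "default-k8s-diff-port-417078",
    		Driver:        "docker",
    		Memory:        3072,
    		CPUs:          2,
    		APIServerPort: 8444,
    		KubernetesConfig: KubernetesConfig{
    			KubernetesVersion: "v1.34.1",
    			ClusterName:       "default-k8s-diff-port-417078",
    			ContainerRuntime:  "crio",
    			NetworkPlugin:     "cni",
    			ServiceCIDR:       "10.96.0.0/12",
    		},
    	}
    	if err := saveProfile(filepath.Join(os.Getenv("HOME"), ".minikube"), cfg); err != nil {
    		panic(err)
    	}
    }
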
	I1002 08:04:23.063399  501823 out.go:179] * Starting "default-k8s-diff-port-417078" primary control-plane node in "default-k8s-diff-port-417078" cluster
	I1002 08:04:23.066373  501823 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 08:04:23.069433  501823 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 08:04:23.072386  501823 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:04:23.072456  501823 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 08:04:23.072467  501823 cache.go:58] Caching tarball of preloaded images
	I1002 08:04:23.072501  501823 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 08:04:23.072674  501823 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 08:04:23.072696  501823 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 08:04:23.072862  501823 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/config.json ...
	I1002 08:04:23.072904  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/config.json: {Name:mk5bd9a340e6b1688dec5bc4670402c65cc73620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:23.098230  501823 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 08:04:23.098255  501823 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 08:04:23.098283  501823 cache.go:232] Successfully downloaded all kic artifacts
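
The two cache.go lines above skip both pull and load because the pinned kicbase image is already present in the local Docker daemon. A small sketch of that existence check, assuming only that the docker CLI is on PATH (docker image inspect exits non-zero for an unknown image):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // imageInDaemon reports whether the local docker daemon already has the image,
    // which is what lets the log above skip pulling the kicbase image.
    func imageInDaemon(ref string) bool {
    	return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
    	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643"
    	if imageInDaemon(ref) {
    		fmt.Println(ref, "exists in daemon, skipping pull")
    	} else {
    		fmt.Println(ref, "not found locally, would pull")
    	}
    }
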
	I1002 08:04:23.098306  501823 start.go:360] acquireMachinesLock for default-k8s-diff-port-417078: {Name:mk71638280421d86b548f4ec42a5f6c5c61e1f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:04:23.098422  501823 start.go:364] duration metric: took 95.501µs to acquireMachinesLock for "default-k8s-diff-port-417078"
	I1002 08:04:23.098453  501823 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-417078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:04:23.098532  501823 start.go:125] createHost starting for "" (driver="docker")
	W1002 08:04:23.081317  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:25.579832  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	I1002 08:04:23.102357  501823 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 08:04:23.102641  501823 start.go:159] libmachine.API.Create for "default-k8s-diff-port-417078" (driver="docker")
	I1002 08:04:23.102688  501823 client.go:168] LocalClient.Create starting
	I1002 08:04:23.102767  501823 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem
	I1002 08:04:23.102801  501823 main.go:141] libmachine: Decoding PEM data...
	I1002 08:04:23.102814  501823 main.go:141] libmachine: Parsing certificate...
	I1002 08:04:23.102875  501823 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem
	I1002 08:04:23.102899  501823 main.go:141] libmachine: Decoding PEM data...
	I1002 08:04:23.102918  501823 main.go:141] libmachine: Parsing certificate...
	I1002 08:04:23.103319  501823 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-417078 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 08:04:23.121916  501823 cli_runner.go:211] docker network inspect default-k8s-diff-port-417078 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 08:04:23.121999  501823 network_create.go:284] running [docker network inspect default-k8s-diff-port-417078] to gather additional debugging logs...
	I1002 08:04:23.122016  501823 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-417078
	W1002 08:04:23.148336  501823 cli_runner.go:211] docker network inspect default-k8s-diff-port-417078 returned with exit code 1
	I1002 08:04:23.148367  501823 network_create.go:287] error running [docker network inspect default-k8s-diff-port-417078]: docker network inspect default-k8s-diff-port-417078: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-417078 not found
	I1002 08:04:23.148381  501823 network_create.go:289] output of [docker network inspect default-k8s-diff-port-417078]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-417078 not found
	
	** /stderr **
	I1002 08:04:23.148497  501823 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:04:23.165787  501823 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-87a294cab4b5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:50:ad:a1:2a:88} reservation:<nil>}
	I1002 08:04:23.166176  501823 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-560172b9232e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:9f:ec:fb:3f:87} reservation:<nil>}
	I1002 08:04:23.166383  501823 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2eae6334e56d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:6a:a0:79:3a:d9} reservation:<nil>}
	I1002 08:04:23.166935  501823 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7a40}
	I1002 08:04:23.166967  501823 network_create.go:124] attempt to create docker network default-k8s-diff-port-417078 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1002 08:04:23.167039  501823 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-417078 default-k8s-diff-port-417078
	I1002 08:04:23.238347  501823 network_create.go:108] docker network default-k8s-diff-port-417078 192.168.76.0/24 created
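
The three "skipping subnet ... that is taken" lines above show the network for this profile being chosen by probing 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 (all already bound to existing bridge interfaces) before settling on 192.168.76.0/24, i.e. candidate /24s spaced 9 apart. A rough, hedged sketch of that kind of probe; the starting subnet and step size are read off the log, everything else is an assumption rather than minikube's actual helper:

    package main

    import (
    	"fmt"
    	"net"
    )

    // localIPv4s collects every IPv4 address bound to a host interface, so that
    // candidate subnets whose gateway already exists (e.g. a br-... bridge) can be skipped.
    func localIPv4s() ([]net.IP, error) {
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return nil, err
    	}
    	var ips []net.IP
    	for _, a := range addrs {
    		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
    			ips = append(ips, ipnet.IP)
    		}
    	}
    	return ips, nil
    }

    // freePrivateSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... in steps of 9
    // (mirroring the log above) and returns the first /24 containing no local address.
    func freePrivateSubnet() (*net.IPNet, error) {
    	ips, err := localIPv4s()
    	if err != nil {
    		return nil, err
    	}
    	for third := 49; third < 255; third += 9 {
    		_, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
    		taken := false
    		for _, ip := range ips {
    			if subnet.Contains(ip) {
    				taken = true
    				break
    			}
    		}
    		if !taken {
    			return subnet, nil
    		}
    	}
    	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
    }

    func main() {
    	subnet, err := freePrivateSubnet()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("using free private subnet", subnet)
    }
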
	I1002 08:04:23.238393  501823 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-417078" container
	I1002 08:04:23.238491  501823 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 08:04:23.254962  501823 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-417078 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-417078 --label created_by.minikube.sigs.k8s.io=true
	I1002 08:04:23.273102  501823 oci.go:103] Successfully created a docker volume default-k8s-diff-port-417078
	I1002 08:04:23.273187  501823 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-417078-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-417078 --entrypoint /usr/bin/test -v default-k8s-diff-port-417078:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 08:04:23.834615  501823 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-417078
	I1002 08:04:23.834674  501823 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:04:23.834696  501823 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 08:04:23.834768  501823 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-417078:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	W1002 08:04:27.581536  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:30.084872  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	I1002 08:04:28.312128  501823 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-417078:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.477285476s)
	I1002 08:04:28.312161  501823 kic.go:203] duration metric: took 4.47746147s to extract preloaded images to volume ...
	W1002 08:04:28.312295  501823 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 08:04:28.312411  501823 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 08:04:28.384201  501823 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-417078 --name default-k8s-diff-port-417078 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-417078 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-417078 --network default-k8s-diff-port-417078 --ip 192.168.76.2 --volume default-k8s-diff-port-417078:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 08:04:28.685515  501823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Running}}
	I1002 08:04:28.715703  501823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Status}}
	I1002 08:04:28.737998  501823 cli_runner.go:164] Run: docker exec default-k8s-diff-port-417078 stat /var/lib/dpkg/alternatives/iptables
	I1002 08:04:28.793149  501823 oci.go:144] the created container "default-k8s-diff-port-417078" has a running status.
	I1002 08:04:28.793188  501823 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa...
	I1002 08:04:29.457275  501823 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 08:04:29.477853  501823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Status}}
	I1002 08:04:29.496477  501823 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 08:04:29.496501  501823 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-417078 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 08:04:29.537166  501823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Status}}
	I1002 08:04:29.556765  501823 machine.go:93] provisionDockerMachine start ...
	I1002 08:04:29.556880  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:29.574419  501823 main.go:141] libmachine: Using SSH client type: native
	I1002 08:04:29.574755  501823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1002 08:04:29.574780  501823 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 08:04:29.579483  501823 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 08:04:32.713484  501823 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-417078
	
	I1002 08:04:32.713509  501823 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-417078"
	I1002 08:04:32.713620  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:32.732077  501823 main.go:141] libmachine: Using SSH client type: native
	I1002 08:04:32.732385  501823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1002 08:04:32.732410  501823 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-417078 && echo "default-k8s-diff-port-417078" | sudo tee /etc/hostname
	I1002 08:04:32.878534  501823 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-417078
	
	I1002 08:04:32.878619  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:32.898177  501823 main.go:141] libmachine: Using SSH client type: native
	I1002 08:04:32.898485  501823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1002 08:04:32.898510  501823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-417078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-417078/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-417078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 08:04:33.032374  501823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
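
Because the node is an ordinary Docker container, SSH in the lines above goes to 127.0.0.1 on whatever host port Docker published for the container's 22/tcp (33428 in this run); the inspect template that keeps appearing digs that port out of .NetworkSettings.Ports. A hedged sketch of the same lookup through the docker CLI, with the container name and Go template copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort asks the docker CLI which host port is published for the
    // container's 22/tcp, using the same template that appears in the log above.
    func sshHostPort(container string) (string, error) {
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("docker inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("default-k8s-diff-port-417078")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("ssh docker@127.0.0.1 -p %s\n", port)
    }
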
	I1002 08:04:33.032410  501823 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 08:04:33.032458  501823 ubuntu.go:190] setting up certificates
	I1002 08:04:33.032469  501823 provision.go:84] configureAuth start
	I1002 08:04:33.032553  501823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-417078
	I1002 08:04:33.052085  501823 provision.go:143] copyHostCerts
	I1002 08:04:33.052159  501823 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 08:04:33.052174  501823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 08:04:33.052257  501823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 08:04:33.052350  501823 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 08:04:33.052362  501823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 08:04:33.052390  501823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 08:04:33.052449  501823 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 08:04:33.052459  501823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 08:04:33.052484  501823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 08:04:33.052538  501823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-417078 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-417078 localhost minikube]
	I1002 08:04:33.338322  501823 provision.go:177] copyRemoteCerts
	I1002 08:04:33.338397  501823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 08:04:33.338444  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:33.356259  501823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa Username:docker}
	I1002 08:04:33.459439  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 08:04:33.479741  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 08:04:33.502402  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1002 08:04:33.526133  501823 provision.go:87] duration metric: took 493.647098ms to configureAuth
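
configureAuth above generates a machine server certificate whose SAN set is listed in the "generating server cert" line: 127.0.0.1, 192.168.76.2, default-k8s-diff-port-417078, localhost and minikube. A hedged sketch of producing a certificate with that SAN set using Go's crypto/x509; it is self-signed here for brevity, whereas the real flow signs with the shared ca.pem/ca-key.pem copied around above:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-417078"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs taken from the "generating server cert" log line.
    		DNSNames:    []string{"default-k8s-diff-port-417078", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    	}
    	// Self-signed: template doubles as parent. The real code would sign with the CA key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	certOut, err := os.Create("server.pem")
    	if err != nil {
    		panic(err)
    	}
    	defer certOut.Close()
    	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})

    	keyOut, err := os.Create("server-key.pem")
    	if err != nil {
    		panic(err)
    	}
    	defer keyOut.Close()
    	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    }
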
	I1002 08:04:33.526253  501823 ubuntu.go:206] setting minikube options for container-runtime
	I1002 08:04:33.526456  501823 config.go:182] Loaded profile config "default-k8s-diff-port-417078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:04:33.526595  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:33.543800  501823 main.go:141] libmachine: Using SSH client type: native
	I1002 08:04:33.544112  501823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1002 08:04:33.544134  501823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 08:04:33.920440  501823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 08:04:33.920488  501823 machine.go:96] duration metric: took 4.36369803s to provisionDockerMachine
	I1002 08:04:33.920498  501823 client.go:171] duration metric: took 10.817800091s to LocalClient.Create
	I1002 08:04:33.920532  501823 start.go:167] duration metric: took 10.817878689s to libmachine.API.Create "default-k8s-diff-port-417078"
	I1002 08:04:33.920546  501823 start.go:293] postStartSetup for "default-k8s-diff-port-417078" (driver="docker")
	I1002 08:04:33.920556  501823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 08:04:33.920629  501823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 08:04:33.920690  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:33.939804  501823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa Username:docker}
	I1002 08:04:34.039687  501823 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 08:04:34.043396  501823 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 08:04:34.043429  501823 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 08:04:34.043442  501823 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 08:04:34.043541  501823 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 08:04:34.043694  501823 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 08:04:34.043808  501823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 08:04:34.051804  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:04:34.071560  501823 start.go:296] duration metric: took 150.99834ms for postStartSetup
	I1002 08:04:34.071945  501823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-417078
	I1002 08:04:34.093507  501823 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/config.json ...
	I1002 08:04:34.093795  501823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 08:04:34.093844  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:34.111986  501823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa Username:docker}
	I1002 08:04:34.212053  501823 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 08:04:34.216964  501823 start.go:128] duration metric: took 11.118415198s to createHost
	I1002 08:04:34.216990  501823 start.go:83] releasing machines lock for "default-k8s-diff-port-417078", held for 11.118555687s
	I1002 08:04:34.217060  501823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-417078
	I1002 08:04:34.237811  501823 ssh_runner.go:195] Run: cat /version.json
	I1002 08:04:34.237881  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:34.238147  501823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 08:04:34.238205  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:34.258416  501823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa Username:docker}
	I1002 08:04:34.259733  501823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa Username:docker}
	I1002 08:04:34.446469  501823 ssh_runner.go:195] Run: systemctl --version
	I1002 08:04:34.453009  501823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 08:04:34.495566  501823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 08:04:34.499987  501823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 08:04:34.500098  501823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 08:04:34.532484  501823 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 08:04:34.532509  501823 start.go:495] detecting cgroup driver to use...
	I1002 08:04:34.532575  501823 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 08:04:34.532652  501823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 08:04:34.551614  501823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 08:04:34.565388  501823 docker.go:218] disabling cri-docker service (if available) ...
	I1002 08:04:34.565475  501823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 08:04:34.586432  501823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 08:04:34.615676  501823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 08:04:34.745519  501823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 08:04:34.867754  501823 docker.go:234] disabling docker service ...
	I1002 08:04:34.867866  501823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 08:04:34.890041  501823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 08:04:34.904329  501823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 08:04:35.034333  501823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 08:04:35.158243  501823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 08:04:35.173361  501823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 08:04:35.187826  501823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 08:04:35.187955  501823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:04:35.197158  501823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 08:04:35.197275  501823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:04:35.206461  501823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:04:35.215761  501823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:04:35.225711  501823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 08:04:35.234675  501823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:04:35.244206  501823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:04:35.258597  501823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:04:35.268297  501823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 08:04:35.276052  501823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 08:04:35.283386  501823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:04:35.403312  501823 ssh_runner.go:195] Run: sudo systemctl restart crio
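
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted: pause_image becomes registry.k8s.io/pause:3.10.1, cgroup_manager becomes cgroupfs, conmon_cgroup is pinned to "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A hedged Go sketch of one such edit, done as a regexp line replacement rather than minikube's actual ssh_runner helper:

    package main

    import (
    	"os"
    	"regexp"
    )

    // replaceLine rewrites every line matching pattern with repl, mimicking the
    // `sed -i 's|^.*pause_image = .*$|...|'` calls shown in the log above.
    func replaceLine(path, pattern, repl string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(pattern)
    	out := re.ReplaceAll(data, []byte(repl))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	edits := map[string]string{
    		`(?m)^.*pause_image = .*$`:    `pause_image = "registry.k8s.io/pause:3.10.1"`,
    		`(?m)^.*cgroup_manager = .*$`: `cgroup_manager = "cgroupfs"`,
    	}
    	for pat, repl := range edits {
    		if err := replaceLine(conf, pat, repl); err != nil {
    			panic(err)
    		}
    	}
    	// After editing, the real flow runs `systemctl daemon-reload` and `systemctl restart crio`.
    }
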
	I1002 08:04:35.531506  501823 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 08:04:35.531611  501823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 08:04:35.535938  501823 start.go:563] Will wait 60s for crictl version
	I1002 08:04:35.536057  501823 ssh_runner.go:195] Run: which crictl
	I1002 08:04:35.539821  501823 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 08:04:35.570918  501823 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 08:04:35.571050  501823 ssh_runner.go:195] Run: crio --version
	I1002 08:04:35.601537  501823 ssh_runner.go:195] Run: crio --version
	I1002 08:04:35.634256  501823 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1002 08:04:32.579403  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:34.580186  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:36.580598  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	I1002 08:04:35.637109  501823 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-417078 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:04:35.653657  501823 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 08:04:35.657672  501823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:04:35.667771  501823 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-417078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 08:04:35.667895  501823 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:04:35.667960  501823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:04:35.702890  501823 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:04:35.702916  501823 crio.go:433] Images already preloaded, skipping extraction
	I1002 08:04:35.702976  501823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:04:35.733434  501823 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:04:35.733456  501823 cache_images.go:85] Images are preloaded, skipping loading
	I1002 08:04:35.733465  501823 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1002 08:04:35.733552  501823 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-417078 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 08:04:35.733638  501823 ssh_runner.go:195] Run: crio config
	I1002 08:04:35.789381  501823 cni.go:84] Creating CNI manager for ""
	I1002 08:04:35.789404  501823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:04:35.789419  501823 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 08:04:35.789470  501823 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-417078 NodeName:default-k8s-diff-port-417078 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 08:04:35.789635  501823 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-417078"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 08:04:35.789717  501823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 08:04:35.797674  501823 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 08:04:35.797800  501823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 08:04:35.805634  501823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 08:04:35.818364  501823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 08:04:35.831886  501823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1002 08:04:35.845697  501823 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 08:04:35.849567  501823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:04:35.859591  501823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:04:35.969169  501823 ssh_runner.go:195] Run: sudo systemctl start kubelet
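
The kubeadm config printed above was just written into the container as /var/tmp/minikube/kubeadm.yaml.new (the 2225-byte scp a few lines back), alongside the kubelet unit drop-in, before kubelet is started. A hedged sketch of the eventual hand-off to `kubeadm init --config ...`; the exact file name used at init time and any extra flags (such as --ignore-preflight-errors) belong to minikube's bootstrapper and are assumptions here, as is the kubeadm path next to the kubelet binary found under /var/lib/minikube/binaries/v1.34.1:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Run kubeadm inside the kic container against the generated config.
    	cmd := exec.Command("docker", "exec", "default-k8s-diff-port-417078",
    		"/var/lib/minikube/binaries/v1.34.1/kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		panic(err)
    	}
    }
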
	I1002 08:04:35.986400  501823 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078 for IP: 192.168.76.2
	I1002 08:04:35.986474  501823 certs.go:195] generating shared ca certs ...
	I1002 08:04:35.986507  501823 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:35.986691  501823 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 08:04:35.986763  501823 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 08:04:35.986788  501823 certs.go:257] generating profile certs ...
	I1002 08:04:35.986878  501823 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.key
	I1002 08:04:35.986917  501823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt with IP's: []
	I1002 08:04:36.605918  501823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt ...
	I1002 08:04:36.605954  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: {Name:mka6519ecd3e36180c67d7823d0cae5651c17da9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:36.606161  501823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.key ...
	I1002 08:04:36.606179  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.key: {Name:mkfcd26be7e79341b1876c8c57887f885d206b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:36.606277  501823 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.key.f1b5b37f
	I1002 08:04:36.606296  501823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.crt.f1b5b37f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 08:04:36.706088  501823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.crt.f1b5b37f ...
	I1002 08:04:36.706116  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.crt.f1b5b37f: {Name:mk145e047376f7f1354ede99cf1be0b847606ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:36.706295  501823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.key.f1b5b37f ...
	I1002 08:04:36.706310  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.key.f1b5b37f: {Name:mkd7049910df419135a4b1866b4c9383d9092153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:36.706393  501823 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.crt.f1b5b37f -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.crt
	I1002 08:04:36.706478  501823 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.key.f1b5b37f -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.key
	I1002 08:04:36.706542  501823 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.key
	I1002 08:04:36.706560  501823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.crt with IP's: []
	I1002 08:04:37.256535  501823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.crt ...
	I1002 08:04:37.256567  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.crt: {Name:mk648b4d57e33b5707041cb91e08b69f449f9de9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:37.256757  501823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.key ...
	I1002 08:04:37.256768  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.key: {Name:mk11b4c9ba8671ce17ac0bb5832cee7279c2b7a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:37.256938  501823 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 08:04:37.256973  501823 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 08:04:37.256983  501823 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 08:04:37.257011  501823 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 08:04:37.257035  501823 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 08:04:37.257056  501823 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 08:04:37.257108  501823 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:04:37.257676  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 08:04:37.277299  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 08:04:37.295813  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 08:04:37.315164  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 08:04:37.333216  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 08:04:37.351303  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 08:04:37.371404  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 08:04:37.392621  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 08:04:37.412550  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 08:04:37.430740  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 08:04:37.449751  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 08:04:37.468115  501823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 08:04:37.481168  501823 ssh_runner.go:195] Run: openssl version
	I1002 08:04:37.487570  501823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 08:04:37.496260  501823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:04:37.500092  501823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:04:37.500187  501823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:04:37.541170  501823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 08:04:37.550104  501823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 08:04:37.558536  501823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 08:04:37.562395  501823 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 08:04:37.562465  501823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 08:04:37.603585  501823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 08:04:37.612135  501823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 08:04:37.620367  501823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 08:04:37.624559  501823 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 08:04:37.624644  501823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 08:04:37.669289  501823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
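The openssl/ln sequence above follows the standard OpenSSL hashed-directory layout: each CA certificate copied under /usr/share/ca-certificates is hashed with openssl x509 -hash -noout, and a symlink named <hash>.0 is created in /etc/ssl/certs so the node's runtime trusts the minikube CA and the test-profile certificates. A minimal shell sketch of the same pattern (illustrative only, not part of the captured run; the hash b5213941 is the one the log shows for minikubeCA.pem):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"  # same link minikube creates
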
	I1002 08:04:37.678397  501823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 08:04:37.682942  501823 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 08:04:37.683004  501823 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-417078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:04:37.683074  501823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 08:04:37.683173  501823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:04:37.710376  501823 cri.go:89] found id: ""
	I1002 08:04:37.710493  501823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 08:04:37.719286  501823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 08:04:37.733533  501823 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 08:04:37.733656  501823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 08:04:37.741981  501823 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 08:04:37.742060  501823 kubeadm.go:157] found existing configuration files:
	
	I1002 08:04:37.742152  501823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1002 08:04:37.750123  501823 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 08:04:37.750250  501823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 08:04:37.758153  501823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1002 08:04:37.766589  501823 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 08:04:37.766706  501823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 08:04:37.774445  501823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1002 08:04:37.782620  501823 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 08:04:37.782691  501823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 08:04:37.790371  501823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1002 08:04:37.798417  501823 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 08:04:37.798508  501823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 08:04:37.805933  501823 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 08:04:37.849426  501823 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 08:04:37.849545  501823 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 08:04:37.874462  501823 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 08:04:37.874585  501823 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 08:04:37.874645  501823 kubeadm.go:318] OS: Linux
	I1002 08:04:37.874737  501823 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 08:04:37.874802  501823 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 08:04:37.874861  501823 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 08:04:37.874922  501823 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 08:04:37.874981  501823 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 08:04:37.875040  501823 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 08:04:37.875119  501823 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 08:04:37.875185  501823 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 08:04:37.875241  501823 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 08:04:37.943897  501823 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 08:04:37.944047  501823 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 08:04:37.944147  501823 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 08:04:37.954817  501823 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1002 08:04:38.581852  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:41.081688  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	I1002 08:04:37.958130  501823 out.go:252]   - Generating certificates and keys ...
	I1002 08:04:37.958232  501823 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 08:04:37.958309  501823 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 08:04:38.233281  501823 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 08:04:38.425090  501823 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 08:04:38.881032  501823 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 08:04:39.605036  501823 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 08:04:39.793181  501823 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 08:04:39.793545  501823 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-417078 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 08:04:40.112711  501823 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 08:04:40.113094  501823 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-417078 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 08:04:40.352190  501823 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 08:04:40.681976  501823 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 08:04:41.100073  501823 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 08:04:41.100389  501823 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 08:04:41.741166  501823 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 08:04:42.017820  501823 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 08:04:42.550741  501823 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 08:04:43.063828  501823 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 08:04:43.338740  501823 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 08:04:43.339669  501823 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 08:04:43.342511  501823 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1002 08:04:43.581138  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	I1002 08:04:44.081379  498230 pod_ready.go:94] pod "coredns-66bc5c9577-h88d8" is "Ready"
	I1002 08:04:44.081406  498230 pod_ready.go:86] duration metric: took 41.007162248s for pod "coredns-66bc5c9577-h88d8" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.085131  498230 pod_ready.go:83] waiting for pod "etcd-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.091020  498230 pod_ready.go:94] pod "etcd-embed-certs-171347" is "Ready"
	I1002 08:04:44.091098  498230 pod_ready.go:86] duration metric: took 5.943111ms for pod "etcd-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.094529  498230 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.100646  498230 pod_ready.go:94] pod "kube-apiserver-embed-certs-171347" is "Ready"
	I1002 08:04:44.100716  498230 pod_ready.go:86] duration metric: took 6.163955ms for pod "kube-apiserver-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.105105  498230 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.278850  498230 pod_ready.go:94] pod "kube-controller-manager-embed-certs-171347" is "Ready"
	I1002 08:04:44.278968  498230 pod_ready.go:86] duration metric: took 173.789165ms for pod "kube-controller-manager-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.477888  498230 pod_ready.go:83] waiting for pod "kube-proxy-jzmxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.878832  498230 pod_ready.go:94] pod "kube-proxy-jzmxf" is "Ready"
	I1002 08:04:44.878856  498230 pod_ready.go:86] duration metric: took 400.887088ms for pod "kube-proxy-jzmxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:45.078608  498230 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:45.477594  498230 pod_ready.go:94] pod "kube-scheduler-embed-certs-171347" is "Ready"
	I1002 08:04:45.477683  498230 pod_ready.go:86] duration metric: took 399.045233ms for pod "kube-scheduler-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:45.477713  498230 pod_ready.go:40] duration metric: took 42.462923176s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:04:45.577467  498230 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 08:04:45.580655  498230 out.go:179] * Done! kubectl is now configured to use "embed-certs-171347" cluster and "default" namespace by default
	I1002 08:04:43.345716  501823 out.go:252]   - Booting up control plane ...
	I1002 08:04:43.345817  501823 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 08:04:43.345898  501823 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 08:04:43.346300  501823 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 08:04:43.362880  501823 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 08:04:43.362994  501823 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 08:04:43.373165  501823 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 08:04:43.373517  501823 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 08:04:43.373565  501823 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 08:04:43.507556  501823 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 08:04:43.507682  501823 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 08:04:45.011245  501823 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.502936487s
	I1002 08:04:45.037327  501823 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 08:04:45.037783  501823 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1002 08:04:45.038196  501823 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 08:04:45.039205  501823 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 08:04:49.654687  501823 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.615046824s
	I1002 08:04:52.285177  501823 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.245176108s
	I1002 08:04:53.040700  501823 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.00206343s
	I1002 08:04:53.061095  501823 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 08:04:53.077580  501823 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 08:04:53.094897  501823 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 08:04:53.095543  501823 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-417078 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 08:04:53.109531  501823 kubeadm.go:318] [bootstrap-token] Using token: 3sw9ub.irdnukdqoch17m3b
	I1002 08:04:53.112680  501823 out.go:252]   - Configuring RBAC rules ...
	I1002 08:04:53.112823  501823 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 08:04:53.116784  501823 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 08:04:53.131259  501823 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 08:04:53.135362  501823 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 08:04:53.147015  501823 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 08:04:53.152559  501823 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 08:04:53.447221  501823 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 08:04:53.896376  501823 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 08:04:54.451962  501823 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 08:04:54.453434  501823 kubeadm.go:318] 
	I1002 08:04:54.453510  501823 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 08:04:54.453516  501823 kubeadm.go:318] 
	I1002 08:04:54.453596  501823 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 08:04:54.453605  501823 kubeadm.go:318] 
	I1002 08:04:54.453632  501823 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 08:04:54.453694  501823 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 08:04:54.453746  501823 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 08:04:54.453750  501823 kubeadm.go:318] 
	I1002 08:04:54.453807  501823 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 08:04:54.453811  501823 kubeadm.go:318] 
	I1002 08:04:54.453861  501823 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 08:04:54.453883  501823 kubeadm.go:318] 
	I1002 08:04:54.453937  501823 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 08:04:54.454015  501823 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 08:04:54.454123  501823 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 08:04:54.454129  501823 kubeadm.go:318] 
	I1002 08:04:54.454217  501823 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 08:04:54.454297  501823 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 08:04:54.454301  501823 kubeadm.go:318] 
	I1002 08:04:54.454389  501823 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token 3sw9ub.irdnukdqoch17m3b \
	I1002 08:04:54.454497  501823 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf \
	I1002 08:04:54.454518  501823 kubeadm.go:318] 	--control-plane 
	I1002 08:04:54.454523  501823 kubeadm.go:318] 
	I1002 08:04:54.454611  501823 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 08:04:54.454615  501823 kubeadm.go:318] 
	I1002 08:04:54.454700  501823 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token 3sw9ub.irdnukdqoch17m3b \
	I1002 08:04:54.454818  501823 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf 
	I1002 08:04:54.458331  501823 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 08:04:54.458567  501823 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 08:04:54.458685  501823 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
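For reference, the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA's public key. Assuming the RSA CA that minikube generates and the certificateDir shown earlier in this log (/var/lib/minikube/certs), it can be recomputed on the control-plane node with the pipeline documented for kubeadm (a sketch, not part of the captured run):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
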
	I1002 08:04:54.458704  501823 cni.go:84] Creating CNI manager for ""
	I1002 08:04:54.458711  501823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:04:54.464027  501823 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 08:04:54.466943  501823 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 08:04:54.471630  501823 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 08:04:54.471656  501823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 08:04:54.501610  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 08:04:54.822989  501823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 08:04:54.823159  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:54.823246  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-417078 minikube.k8s.io/updated_at=2025_10_02T08_04_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=default-k8s-diff-port-417078 minikube.k8s.io/primary=true
	I1002 08:04:54.838088  501823 ops.go:34] apiserver oom_adj: -16
	I1002 08:04:54.990135  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:55.490551  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:55.990488  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:56.490215  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:56.990255  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:57.490747  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
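The repeated "kubectl get sa default" invocations above (spaced roughly half a second apart) are minikube polling for the controller-manager to create the default service account in the default namespace before it continues with cluster bring-up. A standalone equivalent of that wait loop, as an illustrative sketch reusing the binary path and kubeconfig from the log, would be:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
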
	
	
	==> CRI-O <==
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.723031312Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5e8d4d21-3b13-469f-a95d-b9bcb705c56c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.726632855Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=fa12f52a-8014-41f2-9b1b-9bfb48fe1291 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.726955426Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.739652569Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.740007231Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/17861a86efa212fe96345a53b5e6af51af4c170164c3d9dfa4692f2b88499c5b/merged/etc/passwd: no such file or directory"
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.740112479Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/17861a86efa212fe96345a53b5e6af51af4c170164c3d9dfa4692f2b88499c5b/merged/etc/group: no such file or directory"
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.740989697Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.761566281Z" level=info msg="Created container db0b849d228fac61d4e2d6e7be757539fbc17766bd3b2e4bc18d8917aebdbf68: kube-system/storage-provisioner/storage-provisioner" id=fa12f52a-8014-41f2-9b1b-9bfb48fe1291 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.766637524Z" level=info msg="Starting container: db0b849d228fac61d4e2d6e7be757539fbc17766bd3b2e4bc18d8917aebdbf68" id=8f98a211-6323-47fb-ba3d-9485e8dce959 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.768965947Z" level=info msg="Started container" PID=1648 containerID=db0b849d228fac61d4e2d6e7be757539fbc17766bd3b2e4bc18d8917aebdbf68 description=kube-system/storage-provisioner/storage-provisioner id=8f98a211-6323-47fb-ba3d-9485e8dce959 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a597c6b60d14da6ad9420a789a23bf7bc1c6f9075b5d63b9a6cc5f1cf2d8483
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.519519081Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.529762203Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.529799217Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.52981589Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.547191976Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.547230303Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.547257404Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.563274258Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.563513078Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.563596Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.567723013Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.567894756Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.567980156Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.571507688Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.571720916Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	db0b849d228fa       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   3a597c6b60d14       storage-provisioner                          kube-system
	036a049d4d170       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           33 seconds ago       Exited              dashboard-metrics-scraper   2                   7506cea014ff0       dashboard-metrics-scraper-6ffb444bf9-jdwcd   kubernetes-dashboard
	85a6c0112fabf       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   1779415ae36a8       kubernetes-dashboard-855c9754f9-lph8n        kubernetes-dashboard
	baed87e4231c7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   af58480e3e35b       busybox                                      default
	a0a6e0d90ca3f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           59 seconds ago       Running             coredns                     1                   08f2093c3ae76       coredns-66bc5c9577-h88d8                     kube-system
	600c1f2a64fc2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           59 seconds ago       Running             kube-proxy                  1                   82590faa9bcc7       kube-proxy-jzmxf                             kube-system
	fe1a50b0490de       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   3a597c6b60d14       storage-provisioner                          kube-system
	cef0953b0e3f7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   914961e612d63       kindnet-q6rpr                                kube-system
	6f3ca884c1303       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b713ef44919de       kube-apiserver-embed-certs-171347            kube-system
	a3295c18de5cd       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   5917a40776297       kube-controller-manager-embed-certs-171347   kube-system
	19e7d5d7bdca5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   af4fc3032430c       etcd-embed-certs-171347                      kube-system
	85a09c19828ce       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   c8f1e034d89bb       kube-scheduler-embed-certs-171347            kube-system
	
	
	==> coredns [a0a6e0d90ca3f8c60830a35bbff243ab0113bb82f6b41ee268f2abb9cf210599] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42646 - 48543 "HINFO IN 6587318862410380506.3595378422212825610. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03124595s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-171347
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-171347
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=embed-certs-171347
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T08_02_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 08:02:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-171347
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:04:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:04:31 +0000   Thu, 02 Oct 2025 08:02:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:04:31 +0000   Thu, 02 Oct 2025 08:02:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:04:31 +0000   Thu, 02 Oct 2025 08:02:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 08:04:31 +0000   Thu, 02 Oct 2025 08:03:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-171347
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c252fe6c0ca45dba4ee6e57615acf95
	  System UUID:                73993af2-e810-4ff8-b445-81bcd4ff9d18
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-h88d8                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m24s
	  kube-system                 etcd-embed-certs-171347                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-q6rpr                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-embed-certs-171347             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-embed-certs-171347    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-jzmxf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-embed-certs-171347             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jdwcd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lph8n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m22s                  kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Warning  CgroupV1                 2m42s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m41s (x8 over 2m42s)  kubelet          Node embed-certs-171347 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m41s (x8 over 2m42s)  kubelet          Node embed-certs-171347 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m41s (x8 over 2m42s)  kubelet          Node embed-certs-171347 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m30s                  kubelet          Node embed-certs-171347 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m30s                  kubelet          Node embed-certs-171347 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s                  kubelet          Node embed-certs-171347 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m25s                  node-controller  Node embed-certs-171347 event: Registered Node embed-certs-171347 in Controller
	  Normal   NodeReady                102s                   kubelet          Node embed-certs-171347 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node embed-certs-171347 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node embed-certs-171347 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node embed-certs-171347 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                    node-controller  Node embed-certs-171347 event: Registered Node embed-certs-171347 in Controller
	
	
	==> dmesg <==
	[Oct 2 07:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:00] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:03] overlayfs: idmapped layers are currently not supported
	[ +38.953360] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:04] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [19e7d5d7bdca5512898a0c121ad4ff851265a3f8cf6c48dddb1e91460e0e5e12] <==
	{"level":"warn","ts":"2025-10-02T08:03:58.310211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.346756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.382232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.410152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.485479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.509353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.533225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.576147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.591695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.637647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.659838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.697265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.726705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.764796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.802952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.822173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.860111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.902278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.936988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.971314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:59.071239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:59.104876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:59.137983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:59.163160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:59.219301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54514","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:05:01 up  2:47,  0 user,  load average: 3.59, 3.11, 2.27
	Linux embed-certs-171347 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cef0953b0e3f7851b931442373cc005c869dafa2e5b3791570a189edfeed70be] <==
	I1002 08:04:02.246971       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 08:04:02.299381       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 08:04:02.299633       1 main.go:148] setting mtu 1500 for CNI 
	I1002 08:04:02.299692       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 08:04:02.299754       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T08:04:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 08:04:02.516352       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 08:04:02.516444       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 08:04:02.522745       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 08:04:02.523340       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 08:04:32.516128       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 08:04:32.517061       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 08:04:32.523559       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 08:04:32.523661       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 08:04:33.823922       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 08:04:33.823963       1 metrics.go:72] Registering metrics
	I1002 08:04:33.824036       1 controller.go:711] "Syncing nftables rules"
	I1002 08:04:42.519159       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 08:04:42.519225       1 main.go:301] handling current node
	I1002 08:04:52.523226       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 08:04:52.523267       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6f3ca884c1303597bf9de27670995129fac9974f29ec3998eefcb79f460f2608] <==
	I1002 08:04:01.176353       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 08:04:01.176727       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 08:04:01.183415       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 08:04:01.183481       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 08:04:01.183639       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 08:04:01.183689       1 policy_source.go:240] refreshing policies
	I1002 08:04:01.190621       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 08:04:01.195701       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 08:04:01.198910       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 08:04:01.200429       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 08:04:01.201965       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 08:04:01.208450       1 cache.go:39] Caches are synced for autoregister controller
	I1002 08:04:01.210983       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 08:04:01.238417       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:04:01.273029       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 08:04:01.554354       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 08:04:02.464165       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 08:04:02.568838       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 08:04:02.683198       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:04:02.725385       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:04:02.913252       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.139.253"}
	I1002 08:04:02.940978       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.166.183"}
	I1002 08:04:04.860605       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 08:04:04.909992       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 08:04:05.021216       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a3295c18de5cd39930de6a29eafc9bfeb208a5f01b6be0d2f865fafae39a8562] <==
	I1002 08:04:04.492172       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 08:04:04.492182       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 08:04:04.495487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 08:04:04.496899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 08:04:04.496984       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 08:04:04.496996       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 08:04:04.500403       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 08:04:04.500556       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 08:04:04.500640       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 08:04:04.500686       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 08:04:04.500715       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 08:04:04.502668       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 08:04:04.503885       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 08:04:04.503958       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 08:04:04.504035       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 08:04:04.504273       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 08:04:04.504496       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 08:04:04.504622       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 08:04:04.504702       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 08:04:04.504799       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-171347"
	I1002 08:04:04.504753       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 08:04:04.504904       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 08:04:04.505680       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 08:04:04.508035       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 08:04:04.509257       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [600c1f2a64fc29e5faea39640a8f0c02a79132bb36624fecd3cf771143b4199e] <==
	I1002 08:04:02.877278       1 server_linux.go:53] "Using iptables proxy"
	I1002 08:04:03.172551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 08:04:03.278339       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 08:04:03.279163       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 08:04:03.279246       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 08:04:03.373856       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 08:04:03.373978       1 server_linux.go:132] "Using iptables Proxier"
	I1002 08:04:03.378572       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 08:04:03.378925       1 server.go:527] "Version info" version="v1.34.1"
	I1002 08:04:03.379585       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:04:03.381024       1 config.go:200] "Starting service config controller"
	I1002 08:04:03.381125       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 08:04:03.381227       1 config.go:106] "Starting endpoint slice config controller"
	I1002 08:04:03.381261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 08:04:03.381310       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 08:04:03.381337       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 08:04:03.382187       1 config.go:309] "Starting node config controller"
	I1002 08:04:03.382247       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 08:04:03.382277       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 08:04:03.481557       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 08:04:03.481662       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 08:04:03.481693       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [85a09c19828ce281864f49326c73b8b58d618d6e28f38bb8d34c435302ffd907] <==
	I1002 08:04:03.000385       1 serving.go:386] Generated self-signed cert in-memory
	I1002 08:04:03.835354       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 08:04:03.835382       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:04:03.840697       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 08:04:03.840741       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 08:04:03.840773       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:04:03.840781       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:04:03.840794       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:04:03.840809       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:04:03.843586       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 08:04:03.843670       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 08:04:03.941424       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:04:03.941488       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 08:04:03.941578       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 08:04:05 embed-certs-171347 kubelet[780]: E1002 08:04:05.107504     780 status_manager.go:1018] "Failed to get status for pod" err="pods \"dashboard-metrics-scraper-6ffb444bf9-jdwcd\" is forbidden: User \"system:node:embed-certs-171347\" cannot get resource \"pods\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'embed-certs-171347' and this object" podUID="b89d4134-6e85-430a-81a6-3a5ba9870788" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd"
	Oct 02 08:04:05 embed-certs-171347 kubelet[780]: I1002 08:04:05.250974     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/732dd77c-3a1a-4f39-be41-fee9623149cf-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-lph8n\" (UID: \"732dd77c-3a1a-4f39-be41-fee9623149cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lph8n"
	Oct 02 08:04:05 embed-certs-171347 kubelet[780]: I1002 08:04:05.251925     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql85w\" (UniqueName: \"kubernetes.io/projected/b89d4134-6e85-430a-81a6-3a5ba9870788-kube-api-access-ql85w\") pod \"dashboard-metrics-scraper-6ffb444bf9-jdwcd\" (UID: \"b89d4134-6e85-430a-81a6-3a5ba9870788\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd"
	Oct 02 08:04:05 embed-certs-171347 kubelet[780]: I1002 08:04:05.251985     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b89d4134-6e85-430a-81a6-3a5ba9870788-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-jdwcd\" (UID: \"b89d4134-6e85-430a-81a6-3a5ba9870788\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd"
	Oct 02 08:04:05 embed-certs-171347 kubelet[780]: I1002 08:04:05.252068     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xztrr\" (UniqueName: \"kubernetes.io/projected/732dd77c-3a1a-4f39-be41-fee9623149cf-kube-api-access-xztrr\") pod \"kubernetes-dashboard-855c9754f9-lph8n\" (UID: \"732dd77c-3a1a-4f39-be41-fee9623149cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lph8n"
	Oct 02 08:04:06 embed-certs-171347 kubelet[780]: W1002 08:04:06.366968     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/crio-1779415ae36a8b81471d578ee6c5c347250601070058debc2f5fe01b8e442532 WatchSource:0}: Error finding container 1779415ae36a8b81471d578ee6c5c347250601070058debc2f5fe01b8e442532: Status 404 returned error can't find the container with id 1779415ae36a8b81471d578ee6c5c347250601070058debc2f5fe01b8e442532
	Oct 02 08:04:11 embed-certs-171347 kubelet[780]: I1002 08:04:11.640710     780 scope.go:117] "RemoveContainer" containerID="8716230077e27fe5f2dea32eef238683d1245cf13dc6fc85c8b8566c7a8da18e"
	Oct 02 08:04:12 embed-certs-171347 kubelet[780]: I1002 08:04:12.640746     780 scope.go:117] "RemoveContainer" containerID="8716230077e27fe5f2dea32eef238683d1245cf13dc6fc85c8b8566c7a8da18e"
	Oct 02 08:04:12 embed-certs-171347 kubelet[780]: I1002 08:04:12.641676     780 scope.go:117] "RemoveContainer" containerID="19445b88f10214b2add34015a72f1734834ae9f0a1098e8d87de0b581f6b7e30"
	Oct 02 08:04:12 embed-certs-171347 kubelet[780]: E1002 08:04:12.641970     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jdwcd_kubernetes-dashboard(b89d4134-6e85-430a-81a6-3a5ba9870788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd" podUID="b89d4134-6e85-430a-81a6-3a5ba9870788"
	Oct 02 08:04:16 embed-certs-171347 kubelet[780]: I1002 08:04:16.299751     780 scope.go:117] "RemoveContainer" containerID="19445b88f10214b2add34015a72f1734834ae9f0a1098e8d87de0b581f6b7e30"
	Oct 02 08:04:16 embed-certs-171347 kubelet[780]: E1002 08:04:16.299996     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jdwcd_kubernetes-dashboard(b89d4134-6e85-430a-81a6-3a5ba9870788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd" podUID="b89d4134-6e85-430a-81a6-3a5ba9870788"
	Oct 02 08:04:27 embed-certs-171347 kubelet[780]: I1002 08:04:27.505355     780 scope.go:117] "RemoveContainer" containerID="19445b88f10214b2add34015a72f1734834ae9f0a1098e8d87de0b581f6b7e30"
	Oct 02 08:04:27 embed-certs-171347 kubelet[780]: I1002 08:04:27.698218     780 scope.go:117] "RemoveContainer" containerID="19445b88f10214b2add34015a72f1734834ae9f0a1098e8d87de0b581f6b7e30"
	Oct 02 08:04:27 embed-certs-171347 kubelet[780]: I1002 08:04:27.698413     780 scope.go:117] "RemoveContainer" containerID="036a049d4d170fc6f94d917d0b70ffeec2e78e29355403e588a6a8c388ef33f1"
	Oct 02 08:04:27 embed-certs-171347 kubelet[780]: E1002 08:04:27.698581     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jdwcd_kubernetes-dashboard(b89d4134-6e85-430a-81a6-3a5ba9870788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd" podUID="b89d4134-6e85-430a-81a6-3a5ba9870788"
	Oct 02 08:04:27 embed-certs-171347 kubelet[780]: I1002 08:04:27.740109     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lph8n" podStartSLOduration=12.097069007 podStartE2EDuration="22.740091531s" podCreationTimestamp="2025-10-02 08:04:05 +0000 UTC" firstStartedPulling="2025-10-02 08:04:06.374726286 +0000 UTC m=+12.081809218" lastFinishedPulling="2025-10-02 08:04:17.01774881 +0000 UTC m=+22.724831742" observedRunningTime="2025-10-02 08:04:17.693239437 +0000 UTC m=+23.400322369" watchObservedRunningTime="2025-10-02 08:04:27.740091531 +0000 UTC m=+33.447174463"
	Oct 02 08:04:33 embed-certs-171347 kubelet[780]: I1002 08:04:33.721121     780 scope.go:117] "RemoveContainer" containerID="fe1a50b0490de1a057bb2439be07b143442b3be835d66e0e05add86a488991b7"
	Oct 02 08:04:36 embed-certs-171347 kubelet[780]: I1002 08:04:36.299537     780 scope.go:117] "RemoveContainer" containerID="036a049d4d170fc6f94d917d0b70ffeec2e78e29355403e588a6a8c388ef33f1"
	Oct 02 08:04:36 embed-certs-171347 kubelet[780]: E1002 08:04:36.299729     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jdwcd_kubernetes-dashboard(b89d4134-6e85-430a-81a6-3a5ba9870788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd" podUID="b89d4134-6e85-430a-81a6-3a5ba9870788"
	Oct 02 08:04:47 embed-certs-171347 kubelet[780]: I1002 08:04:47.507648     780 scope.go:117] "RemoveContainer" containerID="036a049d4d170fc6f94d917d0b70ffeec2e78e29355403e588a6a8c388ef33f1"
	Oct 02 08:04:47 embed-certs-171347 kubelet[780]: E1002 08:04:47.507974     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jdwcd_kubernetes-dashboard(b89d4134-6e85-430a-81a6-3a5ba9870788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd" podUID="b89d4134-6e85-430a-81a6-3a5ba9870788"
	Oct 02 08:04:58 embed-certs-171347 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 08:04:58 embed-certs-171347 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 08:04:58 embed-certs-171347 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [85a6c0112fabfd159eed64bcdbc0d532333c469b91bb51b8b81cceaa57497dfa] <==
	2025/10/02 08:04:17 Using namespace: kubernetes-dashboard
	2025/10/02 08:04:17 Using in-cluster config to connect to apiserver
	2025/10/02 08:04:17 Using secret token for csrf signing
	2025/10/02 08:04:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 08:04:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 08:04:17 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 08:04:17 Generating JWE encryption key
	2025/10/02 08:04:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 08:04:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 08:04:18 Initializing JWE encryption key from synchronized object
	2025/10/02 08:04:18 Creating in-cluster Sidecar client
	2025/10/02 08:04:18 Serving insecurely on HTTP port: 9090
	2025/10/02 08:04:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 08:04:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 08:04:17 Starting overwatch
	
	
	==> storage-provisioner [db0b849d228fac61d4e2d6e7be757539fbc17766bd3b2e4bc18d8917aebdbf68] <==
	I1002 08:04:33.801475       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 08:04:33.801610       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 08:04:33.805387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:37.260409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:41.521439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:45.120620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:48.175614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:51.198608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:51.208106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:04:51.208344       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 08:04:51.209658       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-171347_31d1bd72-87a6-4577-a779-321f920f8894!
	I1002 08:04:51.210561       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5810a4b1-cc04-4b0a-996b-984738abc721", APIVersion:"v1", ResourceVersion:"688", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-171347_31d1bd72-87a6-4577-a779-321f920f8894 became leader
	W1002 08:04:51.217640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:51.223810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:04:51.310447       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-171347_31d1bd72-87a6-4577-a779-321f920f8894!
	W1002 08:04:53.226239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:53.232940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:55.235974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:55.240133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:57.243329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:57.248400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:59.251868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:59.269503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:01.272867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:01.285184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fe1a50b0490de1a057bb2439be07b143442b3be835d66e0e05add86a488991b7] <==
	I1002 08:04:02.802470       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 08:04:32.810763       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
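The kindnet and storage-provisioner logs above both fail with "dial tcp 10.96.0.1:443: i/o timeout" for roughly 30 seconds after the node restart before their caches finally sync. A minimal Go probe along the following lines reproduces that reachability check against the default kubernetes Service VIP; it is an illustrative sketch only, not part of the test suite, and it assumes it is run from the node or from a pod inside the cluster:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the in-cluster apiserver VIP the components above are dialing.
	// A short timeout mirrors the reflector's failed list/watch attempts.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver VIP reachable")
}
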
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-171347 -n embed-certs-171347
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-171347 -n embed-certs-171347: exit status 2 (538.50006ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-171347 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
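The post-mortem above ends by shelling out to kubectl with --field-selector=status.phase!=Running to surface any pods that are not running. The same query can be issued programmatically with client-go; the sketch below is illustrative only and assumes a kubeconfig at the default ~/.kube/config, which the test harness itself does not rely on:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the default location; the test itself uses kubectl --context.
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same field selector the post-mortem uses: anything not in phase Running.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
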
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-171347
helpers_test.go:243: (dbg) docker inspect embed-certs-171347:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa",
	        "Created": "2025-10-02T08:02:00.578455149Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 498358,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T08:03:47.492578568Z",
	            "FinishedAt": "2025-10-02T08:03:46.439404398Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/hostname",
	        "HostsPath": "/var/lib/docker/containers/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/hosts",
	        "LogPath": "/var/lib/docker/containers/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa-json.log",
	        "Name": "/embed-certs-171347",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-171347:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-171347",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa",
	                "LowerDir": "/var/lib/docker/overlay2/c92ba62aeaf74f1e329cdefec79ac5294c1ee446a93853845f2f03c39bb325b3-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c92ba62aeaf74f1e329cdefec79ac5294c1ee446a93853845f2f03c39bb325b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c92ba62aeaf74f1e329cdefec79ac5294c1ee446a93853845f2f03c39bb325b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c92ba62aeaf74f1e329cdefec79ac5294c1ee446a93853845f2f03c39bb325b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-171347",
	                "Source": "/var/lib/docker/volumes/embed-certs-171347/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-171347",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-171347",
	                "name.minikube.sigs.k8s.io": "embed-certs-171347",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7556ba907aaf720a1498b85d8d8dee950c078f679696fd827c10f9855bda88b8",
	            "SandboxKey": "/var/run/docker/netns/7556ba907aaf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-171347": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:13:cc:c8:ec:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02e39ca8e594ec82c902deecf74b9a14d44881e9835232c2f729a3d1bc104bcc",
	                    "EndpointID": "5b6c6499a7c6289b6110e44f904fbd4bd35f5b09a64b3c061e7e83456b9d18bb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-171347",
	                        "411784c5c3f5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
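The docker inspect output above shows the apiserver port 8443/tcp published on 127.0.0.1:33426, which is how the status probes below reach the cluster. As an illustration only (not something the harness runs), the same mapping can be read back with docker inspect's Go templates:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The Go template indexes the published-ports map for the 8443/tcp binding shown above.
	format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", format, "embed-certs-171347").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Printf("apiserver reachable on 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
}

Running "docker port embed-certs-171347 8443/tcp" reports the same binding.
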
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-171347 -n embed-certs-171347
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-171347 -n embed-certs-171347: exit status 2 (365.593283ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-171347 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-171347 logs -n 25: (1.320422643s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-356986 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-356986       │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:00 UTC │
	│ start   │ -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-356986       │ jenkins │ v1.37.0 │ 02 Oct 25 08:00 UTC │ 02 Oct 25 08:01 UTC │
	│ image   │ old-k8s-version-356986 image list --format=json                                                                                                                                                                                               │ old-k8s-version-356986       │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ pause   │ -p old-k8s-version-356986 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-356986       │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │                     │
	│ delete  │ -p old-k8s-version-356986                                                                                                                                                                                                                     │ old-k8s-version-356986       │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ delete  │ -p old-k8s-version-356986                                                                                                                                                                                                                     │ old-k8s-version-356986       │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:02 UTC │
	│ delete  │ -p cert-expiration-759246                                                                                                                                                                                                                     │ cert-expiration-759246       │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-604182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │                     │
	│ stop    │ -p no-preload-604182 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p no-preload-604182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-171347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │                     │
	│ stop    │ -p embed-certs-171347 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-171347 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:04 UTC │
	│ image   │ no-preload-604182 image list --format=json                                                                                                                                                                                                    │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p no-preload-604182 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p disable-driver-mounts-466206                                                                                                                                                                                                               │ disable-driver-mounts-466206 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ start   │ -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ image   │ embed-certs-171347 image list --format=json                                                                                                                                                                                                   │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p embed-certs-171347 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:04:22
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:04:22.860282  501823 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:04:22.860492  501823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:04:22.860522  501823 out.go:374] Setting ErrFile to fd 2...
	I1002 08:04:22.860542  501823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:04:22.860958  501823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:04:22.861971  501823 out.go:368] Setting JSON to false
	I1002 08:04:22.863000  501823 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10014,"bootTime":1759382249,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 08:04:22.863072  501823 start.go:140] virtualization:  
	I1002 08:04:22.866925  501823 out.go:179] * [default-k8s-diff-port-417078] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:04:22.870790  501823 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:04:22.870964  501823 notify.go:220] Checking for updates...
	I1002 08:04:22.876733  501823 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:04:22.879695  501823 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:04:22.882675  501823 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 08:04:22.885522  501823 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:04:22.888512  501823 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:04:22.892077  501823 config.go:182] Loaded profile config "embed-certs-171347": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:04:22.892210  501823 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:04:22.927308  501823 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:04:22.927477  501823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:04:22.986024  501823 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:04:22.976277267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:04:22.986140  501823 docker.go:318] overlay module found
	I1002 08:04:22.989212  501823 out.go:179] * Using the docker driver based on user configuration
	I1002 08:04:22.992149  501823 start.go:304] selected driver: docker
	I1002 08:04:22.992173  501823 start.go:924] validating driver "docker" against <nil>
	I1002 08:04:22.992187  501823 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:04:22.992923  501823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:04:23.053338  501823 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:04:23.043636296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:04:23.053493  501823 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 08:04:23.053734  501823 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:04:23.056748  501823 out.go:179] * Using Docker driver with root privileges
	I1002 08:04:23.059792  501823 cni.go:84] Creating CNI manager for ""
	I1002 08:04:23.059892  501823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:04:23.059908  501823 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 08:04:23.060008  501823 start.go:348] cluster config:
	{Name:default-k8s-diff-port-417078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:04:23.063399  501823 out.go:179] * Starting "default-k8s-diff-port-417078" primary control-plane node in "default-k8s-diff-port-417078" cluster
	I1002 08:04:23.066373  501823 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 08:04:23.069433  501823 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 08:04:23.072386  501823 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:04:23.072456  501823 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 08:04:23.072467  501823 cache.go:58] Caching tarball of preloaded images
	I1002 08:04:23.072501  501823 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 08:04:23.072674  501823 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 08:04:23.072696  501823 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 08:04:23.072862  501823 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/config.json ...
	I1002 08:04:23.072904  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/config.json: {Name:mk5bd9a340e6b1688dec5bc4670402c65cc73620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
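
	The profile write above is guarded by a named file lock with a 500ms retry delay and a one-minute timeout. The following is a minimal Go sketch of that acquire-with-retry pattern; the helper name and lock-file path are hypothetical and this is not minikube's actual lock.go implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquire tries to create the lock file exclusively, retrying every delay
    // until timeout elapses; the values mirror the Delay:500ms Timeout:1m0s
    // printed in the log line above.
    func acquire(lockPath string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(lockPath) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out after %s acquiring %s", timeout, lockPath)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("/tmp/config.json.lock", 500*time.Millisecond, time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Println("lock held; safe to write config.json")
    }
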
	I1002 08:04:23.098230  501823 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 08:04:23.098255  501823 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 08:04:23.098283  501823 cache.go:232] Successfully downloaded all kic artifacts
	I1002 08:04:23.098306  501823 start.go:360] acquireMachinesLock for default-k8s-diff-port-417078: {Name:mk71638280421d86b548f4ec42a5f6c5c61e1f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:04:23.098422  501823 start.go:364] duration metric: took 95.501µs to acquireMachinesLock for "default-k8s-diff-port-417078"
	I1002 08:04:23.098453  501823 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-417078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417078 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:04:23.098532  501823 start.go:125] createHost starting for "" (driver="docker")
	W1002 08:04:23.081317  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:25.579832  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	I1002 08:04:23.102357  501823 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 08:04:23.102641  501823 start.go:159] libmachine.API.Create for "default-k8s-diff-port-417078" (driver="docker")
	I1002 08:04:23.102688  501823 client.go:168] LocalClient.Create starting
	I1002 08:04:23.102767  501823 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem
	I1002 08:04:23.102801  501823 main.go:141] libmachine: Decoding PEM data...
	I1002 08:04:23.102814  501823 main.go:141] libmachine: Parsing certificate...
	I1002 08:04:23.102875  501823 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem
	I1002 08:04:23.102899  501823 main.go:141] libmachine: Decoding PEM data...
	I1002 08:04:23.102918  501823 main.go:141] libmachine: Parsing certificate...
	I1002 08:04:23.103319  501823 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-417078 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 08:04:23.121916  501823 cli_runner.go:211] docker network inspect default-k8s-diff-port-417078 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 08:04:23.121999  501823 network_create.go:284] running [docker network inspect default-k8s-diff-port-417078] to gather additional debugging logs...
	I1002 08:04:23.122016  501823 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-417078
	W1002 08:04:23.148336  501823 cli_runner.go:211] docker network inspect default-k8s-diff-port-417078 returned with exit code 1
	I1002 08:04:23.148367  501823 network_create.go:287] error running [docker network inspect default-k8s-diff-port-417078]: docker network inspect default-k8s-diff-port-417078: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-417078 not found
	I1002 08:04:23.148381  501823 network_create.go:289] output of [docker network inspect default-k8s-diff-port-417078]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-417078 not found
	
	** /stderr **
	I1002 08:04:23.148497  501823 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:04:23.165787  501823 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-87a294cab4b5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:50:ad:a1:2a:88} reservation:<nil>}
	I1002 08:04:23.166176  501823 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-560172b9232e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:9f:ec:fb:3f:87} reservation:<nil>}
	I1002 08:04:23.166383  501823 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2eae6334e56d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:6a:a0:79:3a:d9} reservation:<nil>}
	I1002 08:04:23.166935  501823 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7a40}
	I1002 08:04:23.166967  501823 network_create.go:124] attempt to create docker network default-k8s-diff-port-417078 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1002 08:04:23.167039  501823 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-417078 default-k8s-diff-port-417078
	I1002 08:04:23.238347  501823 network_create.go:108] docker network default-k8s-diff-port-417078 192.168.76.0/24 created
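
	The network-create step above scans /24 subnets already backed by a Docker bridge (192.168.49.0, 192.168.58.0, 192.168.67.0) and settles on the first free one, 192.168.76.0/24, with the third octet advancing by 9 between candidates. A small Go sketch of that selection logic follows; the function name and fixed step are illustrative only, not minikube's actual network.go.

    package main

    import "fmt"

    // pickFreeSubnet steps candidate /24s starting at 192.168.49.0 with the
    // third octet advancing by 9 (49, 58, 67, 76, ...) and returns the first
    // subnet not already taken by an existing bridge.
    func pickFreeSubnet(taken map[string]bool) (string, bool) {
    	for octet := 49; octet <= 247; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[subnet] {
    			return subnet, true
    		}
    	}
    	return "", false
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, // existing bridges reported in the log
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    	}
    	if s, ok := pickFreeSubnet(taken); ok {
    		fmt.Println("using free private subnet", s) // prints 192.168.76.0/24
    	}
    }
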
	I1002 08:04:23.238393  501823 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-417078" container
	I1002 08:04:23.238491  501823 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 08:04:23.254962  501823 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-417078 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-417078 --label created_by.minikube.sigs.k8s.io=true
	I1002 08:04:23.273102  501823 oci.go:103] Successfully created a docker volume default-k8s-diff-port-417078
	I1002 08:04:23.273187  501823 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-417078-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-417078 --entrypoint /usr/bin/test -v default-k8s-diff-port-417078:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 08:04:23.834615  501823 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-417078
	I1002 08:04:23.834674  501823 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:04:23.834696  501823 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 08:04:23.834768  501823 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-417078:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	W1002 08:04:27.581536  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:30.084872  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	I1002 08:04:28.312128  501823 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-417078:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.477285476s)
	I1002 08:04:28.312161  501823 kic.go:203] duration metric: took 4.47746147s to extract preloaded images to volume ...
	W1002 08:04:28.312295  501823 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 08:04:28.312411  501823 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 08:04:28.384201  501823 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-417078 --name default-k8s-diff-port-417078 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-417078 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-417078 --network default-k8s-diff-port-417078 --ip 192.168.76.2 --volume default-k8s-diff-port-417078:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 08:04:28.685515  501823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Running}}
	I1002 08:04:28.715703  501823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Status}}
	I1002 08:04:28.737998  501823 cli_runner.go:164] Run: docker exec default-k8s-diff-port-417078 stat /var/lib/dpkg/alternatives/iptables
	I1002 08:04:28.793149  501823 oci.go:144] the created container "default-k8s-diff-port-417078" has a running status.
	I1002 08:04:28.793188  501823 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa...
	I1002 08:04:29.457275  501823 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 08:04:29.477853  501823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Status}}
	I1002 08:04:29.496477  501823 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 08:04:29.496501  501823 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-417078 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 08:04:29.537166  501823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Status}}
	I1002 08:04:29.556765  501823 machine.go:93] provisionDockerMachine start ...
	I1002 08:04:29.556880  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:29.574419  501823 main.go:141] libmachine: Using SSH client type: native
	I1002 08:04:29.574755  501823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1002 08:04:29.574780  501823 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 08:04:29.579483  501823 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 08:04:32.713484  501823 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-417078
	
	I1002 08:04:32.713509  501823 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-417078"
	I1002 08:04:32.713620  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:32.732077  501823 main.go:141] libmachine: Using SSH client type: native
	I1002 08:04:32.732385  501823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1002 08:04:32.732410  501823 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-417078 && echo "default-k8s-diff-port-417078" | sudo tee /etc/hostname
	I1002 08:04:32.878534  501823 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-417078
	
	I1002 08:04:32.878619  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:32.898177  501823 main.go:141] libmachine: Using SSH client type: native
	I1002 08:04:32.898485  501823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1002 08:04:32.898510  501823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-417078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-417078/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-417078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 08:04:33.032374  501823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 08:04:33.032410  501823 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 08:04:33.032458  501823 ubuntu.go:190] setting up certificates
	I1002 08:04:33.032469  501823 provision.go:84] configureAuth start
	I1002 08:04:33.032553  501823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-417078
	I1002 08:04:33.052085  501823 provision.go:143] copyHostCerts
	I1002 08:04:33.052159  501823 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 08:04:33.052174  501823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 08:04:33.052257  501823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 08:04:33.052350  501823 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 08:04:33.052362  501823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 08:04:33.052390  501823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 08:04:33.052449  501823 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 08:04:33.052459  501823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 08:04:33.052484  501823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 08:04:33.052538  501823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-417078 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-417078 localhost minikube]
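
	The server cert generated above is signed by the minikube CA and carries SANs for the container's loopback address, its static network IP, and its hostnames. The Go sketch below shows how a certificate with those SANs can be issued with crypto/x509; the self-signed CA, variable names, and elided error handling are assumptions for illustration, not minikube's own cert helper.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Self-signed CA standing in for the minikubeCA referenced in the log.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	ca, _ := x509.ParseCertificate(caDER)

    	// Server cert whose SANs match those in the log line above:
    	// 127.0.0.1, 192.168.76.2, the node hostname, localhost and minikube.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-417078"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    		DNSNames:     []string{"default-k8s-diff-port-417078", "localhost", "minikube"},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
    	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
    }
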
	I1002 08:04:33.338322  501823 provision.go:177] copyRemoteCerts
	I1002 08:04:33.338397  501823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 08:04:33.338444  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:33.356259  501823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa Username:docker}
	I1002 08:04:33.459439  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 08:04:33.479741  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 08:04:33.502402  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1002 08:04:33.526133  501823 provision.go:87] duration metric: took 493.647098ms to configureAuth
	I1002 08:04:33.526253  501823 ubuntu.go:206] setting minikube options for container-runtime
	I1002 08:04:33.526456  501823 config.go:182] Loaded profile config "default-k8s-diff-port-417078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:04:33.526595  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:33.543800  501823 main.go:141] libmachine: Using SSH client type: native
	I1002 08:04:33.544112  501823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1002 08:04:33.544134  501823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 08:04:33.920440  501823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 08:04:33.920488  501823 machine.go:96] duration metric: took 4.36369803s to provisionDockerMachine
	I1002 08:04:33.920498  501823 client.go:171] duration metric: took 10.817800091s to LocalClient.Create
	I1002 08:04:33.920532  501823 start.go:167] duration metric: took 10.817878689s to libmachine.API.Create "default-k8s-diff-port-417078"
	I1002 08:04:33.920546  501823 start.go:293] postStartSetup for "default-k8s-diff-port-417078" (driver="docker")
	I1002 08:04:33.920556  501823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 08:04:33.920629  501823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 08:04:33.920690  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:33.939804  501823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa Username:docker}
	I1002 08:04:34.039687  501823 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 08:04:34.043396  501823 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 08:04:34.043429  501823 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 08:04:34.043442  501823 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 08:04:34.043541  501823 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 08:04:34.043694  501823 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 08:04:34.043808  501823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 08:04:34.051804  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:04:34.071560  501823 start.go:296] duration metric: took 150.99834ms for postStartSetup
	I1002 08:04:34.071945  501823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-417078
	I1002 08:04:34.093507  501823 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/config.json ...
	I1002 08:04:34.093795  501823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 08:04:34.093844  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:34.111986  501823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa Username:docker}
	I1002 08:04:34.212053  501823 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 08:04:34.216964  501823 start.go:128] duration metric: took 11.118415198s to createHost
	I1002 08:04:34.216990  501823 start.go:83] releasing machines lock for "default-k8s-diff-port-417078", held for 11.118555687s
	I1002 08:04:34.217060  501823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-417078
	I1002 08:04:34.237811  501823 ssh_runner.go:195] Run: cat /version.json
	I1002 08:04:34.237881  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:34.238147  501823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 08:04:34.238205  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:04:34.258416  501823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa Username:docker}
	I1002 08:04:34.259733  501823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa Username:docker}
	I1002 08:04:34.446469  501823 ssh_runner.go:195] Run: systemctl --version
	I1002 08:04:34.453009  501823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 08:04:34.495566  501823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 08:04:34.499987  501823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 08:04:34.500098  501823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 08:04:34.532484  501823 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 08:04:34.532509  501823 start.go:495] detecting cgroup driver to use...
	I1002 08:04:34.532575  501823 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 08:04:34.532652  501823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 08:04:34.551614  501823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 08:04:34.565388  501823 docker.go:218] disabling cri-docker service (if available) ...
	I1002 08:04:34.565475  501823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 08:04:34.586432  501823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 08:04:34.615676  501823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 08:04:34.745519  501823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 08:04:34.867754  501823 docker.go:234] disabling docker service ...
	I1002 08:04:34.867866  501823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 08:04:34.890041  501823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 08:04:34.904329  501823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 08:04:35.034333  501823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 08:04:35.158243  501823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 08:04:35.173361  501823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 08:04:35.187826  501823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 08:04:35.187955  501823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:04:35.197158  501823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 08:04:35.197275  501823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:04:35.206461  501823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:04:35.215761  501823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:04:35.225711  501823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 08:04:35.234675  501823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:04:35.244206  501823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:04:35.258597  501823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:04:35.268297  501823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 08:04:35.276052  501823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 08:04:35.283386  501823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:04:35.403312  501823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 08:04:35.531506  501823 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 08:04:35.531611  501823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 08:04:35.535938  501823 start.go:563] Will wait 60s for crictl version
	I1002 08:04:35.536057  501823 ssh_runner.go:195] Run: which crictl
	I1002 08:04:35.539821  501823 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 08:04:35.570918  501823 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
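
	The two "Will wait 60s" steps above poll for the CRI socket and the crictl binary after restarting cri-o. A minimal Go sketch of that wait-until-path-exists loop follows; the function name, polling interval, and main wrapper are hypothetical, not minikube's start.go implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls until the given path exists or the timeout elapses,
    // roughly the "Will wait 60s for socket path" step shown above.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("CRI socket is ready")
    }
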
	I1002 08:04:35.571050  501823 ssh_runner.go:195] Run: crio --version
	I1002 08:04:35.601537  501823 ssh_runner.go:195] Run: crio --version
	I1002 08:04:35.634256  501823 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1002 08:04:32.579403  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:34.580186  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:36.580598  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	I1002 08:04:35.637109  501823 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-417078 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:04:35.653657  501823 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 08:04:35.657672  501823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:04:35.667771  501823 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-417078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417078 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 08:04:35.667895  501823 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:04:35.667960  501823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:04:35.702890  501823 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:04:35.702916  501823 crio.go:433] Images already preloaded, skipping extraction
	I1002 08:04:35.702976  501823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:04:35.733434  501823 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:04:35.733456  501823 cache_images.go:85] Images are preloaded, skipping loading
	I1002 08:04:35.733465  501823 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1002 08:04:35.733552  501823 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-417078 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 08:04:35.733638  501823 ssh_runner.go:195] Run: crio config
	I1002 08:04:35.789381  501823 cni.go:84] Creating CNI manager for ""
	I1002 08:04:35.789404  501823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:04:35.789419  501823 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 08:04:35.789470  501823 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-417078 NodeName:default-k8s-diff-port-417078 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 08:04:35.789635  501823 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-417078"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 08:04:35.789717  501823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 08:04:35.797674  501823 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 08:04:35.797800  501823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 08:04:35.805634  501823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 08:04:35.818364  501823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 08:04:35.831886  501823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1002 08:04:35.845697  501823 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 08:04:35.849567  501823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:04:35.859591  501823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:04:35.969169  501823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:04:35.986400  501823 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078 for IP: 192.168.76.2
	I1002 08:04:35.986474  501823 certs.go:195] generating shared ca certs ...
	I1002 08:04:35.986507  501823 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:35.986691  501823 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 08:04:35.986763  501823 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 08:04:35.986788  501823 certs.go:257] generating profile certs ...
	I1002 08:04:35.986878  501823 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.key
	I1002 08:04:35.986917  501823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt with IP's: []
	I1002 08:04:36.605918  501823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt ...
	I1002 08:04:36.605954  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: {Name:mka6519ecd3e36180c67d7823d0cae5651c17da9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:36.606161  501823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.key ...
	I1002 08:04:36.606179  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.key: {Name:mkfcd26be7e79341b1876c8c57887f885d206b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:36.606277  501823 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.key.f1b5b37f
	I1002 08:04:36.606296  501823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.crt.f1b5b37f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 08:04:36.706088  501823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.crt.f1b5b37f ...
	I1002 08:04:36.706116  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.crt.f1b5b37f: {Name:mk145e047376f7f1354ede99cf1be0b847606ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:36.706295  501823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.key.f1b5b37f ...
	I1002 08:04:36.706310  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.key.f1b5b37f: {Name:mkd7049910df419135a4b1866b4c9383d9092153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:36.706393  501823 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.crt.f1b5b37f -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.crt
	I1002 08:04:36.706478  501823 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.key.f1b5b37f -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.key
	I1002 08:04:36.706542  501823 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.key
	I1002 08:04:36.706560  501823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.crt with IP's: []
	I1002 08:04:37.256535  501823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.crt ...
	I1002 08:04:37.256567  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.crt: {Name:mk648b4d57e33b5707041cb91e08b69f449f9de9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:37.256757  501823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.key ...
	I1002 08:04:37.256768  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.key: {Name:mk11b4c9ba8671ce17ac0bb5832cee7279c2b7a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:04:37.256938  501823 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 08:04:37.256973  501823 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 08:04:37.256983  501823 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 08:04:37.257011  501823 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 08:04:37.257035  501823 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 08:04:37.257056  501823 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 08:04:37.257108  501823 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:04:37.257676  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 08:04:37.277299  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 08:04:37.295813  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 08:04:37.315164  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 08:04:37.333216  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 08:04:37.351303  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 08:04:37.371404  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 08:04:37.392621  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 08:04:37.412550  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 08:04:37.430740  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 08:04:37.449751  501823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 08:04:37.468115  501823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 08:04:37.481168  501823 ssh_runner.go:195] Run: openssl version
	I1002 08:04:37.487570  501823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 08:04:37.496260  501823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:04:37.500092  501823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:04:37.500187  501823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:04:37.541170  501823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 08:04:37.550104  501823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 08:04:37.558536  501823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 08:04:37.562395  501823 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 08:04:37.562465  501823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 08:04:37.603585  501823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 08:04:37.612135  501823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 08:04:37.620367  501823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 08:04:37.624559  501823 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 08:04:37.624644  501823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 08:04:37.669289  501823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 08:04:37.678397  501823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 08:04:37.682942  501823 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 08:04:37.683004  501823 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-417078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:04:37.683074  501823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 08:04:37.683173  501823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:04:37.710376  501823 cri.go:89] found id: ""
	I1002 08:04:37.710493  501823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 08:04:37.719286  501823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 08:04:37.733533  501823 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 08:04:37.733656  501823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 08:04:37.741981  501823 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 08:04:37.742060  501823 kubeadm.go:157] found existing configuration files:
	
	I1002 08:04:37.742152  501823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1002 08:04:37.750123  501823 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 08:04:37.750250  501823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 08:04:37.758153  501823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1002 08:04:37.766589  501823 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 08:04:37.766706  501823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 08:04:37.774445  501823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1002 08:04:37.782620  501823 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 08:04:37.782691  501823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 08:04:37.790371  501823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1002 08:04:37.798417  501823 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 08:04:37.798508  501823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 08:04:37.805933  501823 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 08:04:37.849426  501823 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 08:04:37.849545  501823 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 08:04:37.874462  501823 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 08:04:37.874585  501823 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 08:04:37.874645  501823 kubeadm.go:318] OS: Linux
	I1002 08:04:37.874737  501823 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 08:04:37.874802  501823 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 08:04:37.874861  501823 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 08:04:37.874922  501823 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 08:04:37.874981  501823 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 08:04:37.875040  501823 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 08:04:37.875119  501823 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 08:04:37.875185  501823 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 08:04:37.875241  501823 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 08:04:37.943897  501823 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 08:04:37.944047  501823 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 08:04:37.944147  501823 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 08:04:37.954817  501823 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1002 08:04:38.581852  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	W1002 08:04:41.081688  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	I1002 08:04:37.958130  501823 out.go:252]   - Generating certificates and keys ...
	I1002 08:04:37.958232  501823 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 08:04:37.958309  501823 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 08:04:38.233281  501823 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 08:04:38.425090  501823 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 08:04:38.881032  501823 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 08:04:39.605036  501823 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 08:04:39.793181  501823 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 08:04:39.793545  501823 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-417078 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 08:04:40.112711  501823 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 08:04:40.113094  501823 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-417078 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 08:04:40.352190  501823 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 08:04:40.681976  501823 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 08:04:41.100073  501823 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 08:04:41.100389  501823 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 08:04:41.741166  501823 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 08:04:42.017820  501823 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 08:04:42.550741  501823 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 08:04:43.063828  501823 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 08:04:43.338740  501823 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 08:04:43.339669  501823 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 08:04:43.342511  501823 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1002 08:04:43.581138  498230 pod_ready.go:104] pod "coredns-66bc5c9577-h88d8" is not "Ready", error: <nil>
	I1002 08:04:44.081379  498230 pod_ready.go:94] pod "coredns-66bc5c9577-h88d8" is "Ready"
	I1002 08:04:44.081406  498230 pod_ready.go:86] duration metric: took 41.007162248s for pod "coredns-66bc5c9577-h88d8" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.085131  498230 pod_ready.go:83] waiting for pod "etcd-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.091020  498230 pod_ready.go:94] pod "etcd-embed-certs-171347" is "Ready"
	I1002 08:04:44.091098  498230 pod_ready.go:86] duration metric: took 5.943111ms for pod "etcd-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.094529  498230 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.100646  498230 pod_ready.go:94] pod "kube-apiserver-embed-certs-171347" is "Ready"
	I1002 08:04:44.100716  498230 pod_ready.go:86] duration metric: took 6.163955ms for pod "kube-apiserver-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.105105  498230 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.278850  498230 pod_ready.go:94] pod "kube-controller-manager-embed-certs-171347" is "Ready"
	I1002 08:04:44.278968  498230 pod_ready.go:86] duration metric: took 173.789165ms for pod "kube-controller-manager-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.477888  498230 pod_ready.go:83] waiting for pod "kube-proxy-jzmxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:44.878832  498230 pod_ready.go:94] pod "kube-proxy-jzmxf" is "Ready"
	I1002 08:04:44.878856  498230 pod_ready.go:86] duration metric: took 400.887088ms for pod "kube-proxy-jzmxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:45.078608  498230 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:45.477594  498230 pod_ready.go:94] pod "kube-scheduler-embed-certs-171347" is "Ready"
	I1002 08:04:45.477683  498230 pod_ready.go:86] duration metric: took 399.045233ms for pod "kube-scheduler-embed-certs-171347" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:04:45.477713  498230 pod_ready.go:40] duration metric: took 42.462923176s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:04:45.577467  498230 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 08:04:45.580655  498230 out.go:179] * Done! kubectl is now configured to use "embed-certs-171347" cluster and "default" namespace by default
	I1002 08:04:43.345716  501823 out.go:252]   - Booting up control plane ...
	I1002 08:04:43.345817  501823 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 08:04:43.345898  501823 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 08:04:43.346300  501823 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 08:04:43.362880  501823 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 08:04:43.362994  501823 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 08:04:43.373165  501823 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 08:04:43.373517  501823 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 08:04:43.373565  501823 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 08:04:43.507556  501823 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 08:04:43.507682  501823 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 08:04:45.011245  501823 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.502936487s
	I1002 08:04:45.037327  501823 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 08:04:45.037783  501823 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1002 08:04:45.038196  501823 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 08:04:45.039205  501823 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 08:04:49.654687  501823 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.615046824s
	I1002 08:04:52.285177  501823 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.245176108s
	I1002 08:04:53.040700  501823 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.00206343s
	I1002 08:04:53.061095  501823 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 08:04:53.077580  501823 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 08:04:53.094897  501823 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 08:04:53.095543  501823 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-417078 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 08:04:53.109531  501823 kubeadm.go:318] [bootstrap-token] Using token: 3sw9ub.irdnukdqoch17m3b
	I1002 08:04:53.112680  501823 out.go:252]   - Configuring RBAC rules ...
	I1002 08:04:53.112823  501823 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 08:04:53.116784  501823 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 08:04:53.131259  501823 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 08:04:53.135362  501823 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 08:04:53.147015  501823 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 08:04:53.152559  501823 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 08:04:53.447221  501823 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 08:04:53.896376  501823 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 08:04:54.451962  501823 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 08:04:54.453434  501823 kubeadm.go:318] 
	I1002 08:04:54.453510  501823 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 08:04:54.453516  501823 kubeadm.go:318] 
	I1002 08:04:54.453596  501823 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 08:04:54.453605  501823 kubeadm.go:318] 
	I1002 08:04:54.453632  501823 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 08:04:54.453694  501823 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 08:04:54.453746  501823 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 08:04:54.453750  501823 kubeadm.go:318] 
	I1002 08:04:54.453807  501823 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 08:04:54.453811  501823 kubeadm.go:318] 
	I1002 08:04:54.453861  501823 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 08:04:54.453883  501823 kubeadm.go:318] 
	I1002 08:04:54.453937  501823 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 08:04:54.454015  501823 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 08:04:54.454123  501823 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 08:04:54.454129  501823 kubeadm.go:318] 
	I1002 08:04:54.454217  501823 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 08:04:54.454297  501823 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 08:04:54.454301  501823 kubeadm.go:318] 
	I1002 08:04:54.454389  501823 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token 3sw9ub.irdnukdqoch17m3b \
	I1002 08:04:54.454497  501823 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf \
	I1002 08:04:54.454518  501823 kubeadm.go:318] 	--control-plane 
	I1002 08:04:54.454523  501823 kubeadm.go:318] 
	I1002 08:04:54.454611  501823 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 08:04:54.454615  501823 kubeadm.go:318] 
	I1002 08:04:54.454700  501823 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token 3sw9ub.irdnukdqoch17m3b \
	I1002 08:04:54.454818  501823 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf 
	I1002 08:04:54.458331  501823 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 08:04:54.458567  501823 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 08:04:54.458685  501823 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 08:04:54.458704  501823 cni.go:84] Creating CNI manager for ""
	I1002 08:04:54.458711  501823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:04:54.464027  501823 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 08:04:54.466943  501823 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 08:04:54.471630  501823 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 08:04:54.471656  501823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 08:04:54.501610  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 08:04:54.822989  501823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 08:04:54.823159  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:54.823246  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-417078 minikube.k8s.io/updated_at=2025_10_02T08_04_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=default-k8s-diff-port-417078 minikube.k8s.io/primary=true
	I1002 08:04:54.838088  501823 ops.go:34] apiserver oom_adj: -16
	I1002 08:04:54.990135  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:55.490551  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:55.990488  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:56.490215  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:56.990255  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:57.490747  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:57.991026  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:58.490741  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:58.991062  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:59.491221  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:04:59.991122  501823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:00.301441  501823 kubeadm.go:1113] duration metric: took 5.478352296s to wait for elevateKubeSystemPrivileges
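
The repeated `kubectl get sa default` calls between 08:04:54 and 08:05:00 are minikube polling, roughly every 500 ms, until the "default" service account exists in the new cluster (it is created asynchronously by the controller manager's service-account controller). A rough sketch of the equivalent polling loop, reusing the binary and kubeconfig paths from the log:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
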
	I1002 08:05:00.301489  501823 kubeadm.go:402] duration metric: took 22.618488301s to StartCluster
	I1002 08:05:00.301520  501823 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:00.301597  501823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:05:00.303392  501823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:00.303703  501823 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:05:00.303845  501823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 08:05:00.304169  501823 config.go:182] Loaded profile config "default-k8s-diff-port-417078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:05:00.304211  501823 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 08:05:00.304289  501823 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-417078"
	I1002 08:05:00.304308  501823 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-417078"
	I1002 08:05:00.304335  501823 host.go:66] Checking if "default-k8s-diff-port-417078" exists ...
	I1002 08:05:00.305129  501823 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-417078"
	I1002 08:05:00.305150  501823 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-417078"
	I1002 08:05:00.305652  501823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Status}}
	I1002 08:05:00.306614  501823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Status}}
	I1002 08:05:00.314655  501823 out.go:179] * Verifying Kubernetes components...
	I1002 08:05:00.319296  501823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:05:00.374915  501823 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-417078"
	I1002 08:05:00.375212  501823 host.go:66] Checking if "default-k8s-diff-port-417078" exists ...
	I1002 08:05:00.375906  501823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Status}}
	I1002 08:05:00.402428  501823 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 08:05:00.415775  501823 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:05:00.415804  501823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 08:05:00.415881  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:05:00.431454  501823 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 08:05:00.431486  501823 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 08:05:00.431560  501823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:05:00.461234  501823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa Username:docker}
	I1002 08:05:00.503893  501823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa Username:docker}
	I1002 08:05:01.014451  501823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:05:01.080916  501823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:05:01.217791  501823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 08:05:01.217981  501823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:05:02.320589  501823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.239585261s)
	I1002 08:05:02.320933  501823 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.102845758s)
	I1002 08:05:02.322026  501823 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-417078" to be "Ready" ...
	I1002 08:05:02.322278  501823 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.104275022s)
	I1002 08:05:02.322295  501823 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1002 08:05:02.324067  501823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.309537783s)
	I1002 08:05:02.400012  501823 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 08:05:02.403302  501823 addons.go:514] duration metric: took 2.099074475s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 08:05:02.829194  501823 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-417078" context rescaled to 1 replicas
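
The sed pipeline completed at 08:05:02 is how minikube injects the host.minikube.internal record: it reads the coredns ConfigMap, inserts a hosts block ahead of the `forward . /etc/resolv.conf` directive (and a `log` directive ahead of `errors`), then replaces the ConfigMap. Assuming that expression applied cleanly, the resulting Corefile fragment would look roughly like:

        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
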
	
	
	==> CRI-O <==
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.723031312Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5e8d4d21-3b13-469f-a95d-b9bcb705c56c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.726632855Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=fa12f52a-8014-41f2-9b1b-9bfb48fe1291 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.726955426Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.739652569Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.740007231Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/17861a86efa212fe96345a53b5e6af51af4c170164c3d9dfa4692f2b88499c5b/merged/etc/passwd: no such file or directory"
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.740112479Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/17861a86efa212fe96345a53b5e6af51af4c170164c3d9dfa4692f2b88499c5b/merged/etc/group: no such file or directory"
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.740989697Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.761566281Z" level=info msg="Created container db0b849d228fac61d4e2d6e7be757539fbc17766bd3b2e4bc18d8917aebdbf68: kube-system/storage-provisioner/storage-provisioner" id=fa12f52a-8014-41f2-9b1b-9bfb48fe1291 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.766637524Z" level=info msg="Starting container: db0b849d228fac61d4e2d6e7be757539fbc17766bd3b2e4bc18d8917aebdbf68" id=8f98a211-6323-47fb-ba3d-9485e8dce959 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:04:33 embed-certs-171347 crio[656]: time="2025-10-02T08:04:33.768965947Z" level=info msg="Started container" PID=1648 containerID=db0b849d228fac61d4e2d6e7be757539fbc17766bd3b2e4bc18d8917aebdbf68 description=kube-system/storage-provisioner/storage-provisioner id=8f98a211-6323-47fb-ba3d-9485e8dce959 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a597c6b60d14da6ad9420a789a23bf7bc1c6f9075b5d63b9a6cc5f1cf2d8483
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.519519081Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.529762203Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.529799217Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.52981589Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.547191976Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.547230303Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.547257404Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.563274258Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.563513078Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.563596Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.567723013Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.567894756Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.567980156Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.571507688Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:04:42 embed-certs-171347 crio[656]: time="2025-10-02T08:04:42.571720916Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	db0b849d228fa       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           30 seconds ago       Running             storage-provisioner         2                   3a597c6b60d14       storage-provisioner                          kube-system
	036a049d4d170       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           36 seconds ago       Exited              dashboard-metrics-scraper   2                   7506cea014ff0       dashboard-metrics-scraper-6ffb444bf9-jdwcd   kubernetes-dashboard
	85a6c0112fabf       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   1779415ae36a8       kubernetes-dashboard-855c9754f9-lph8n        kubernetes-dashboard
	baed87e4231c7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   af58480e3e35b       busybox                                      default
	a0a6e0d90ca3f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   08f2093c3ae76       coredns-66bc5c9577-h88d8                     kube-system
	600c1f2a64fc2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   82590faa9bcc7       kube-proxy-jzmxf                             kube-system
	fe1a50b0490de       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   3a597c6b60d14       storage-provisioner                          kube-system
	cef0953b0e3f7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   914961e612d63       kindnet-q6rpr                                kube-system
	6f3ca884c1303       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b713ef44919de       kube-apiserver-embed-certs-171347            kube-system
	a3295c18de5cd       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   5917a40776297       kube-controller-manager-embed-certs-171347   kube-system
	19e7d5d7bdca5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   af4fc3032430c       etcd-embed-certs-171347                      kube-system
	85a09c19828ce       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   c8f1e034d89bb       kube-scheduler-embed-certs-171347            kube-system
	
	
	==> coredns [a0a6e0d90ca3f8c60830a35bbff243ab0113bb82f6b41ee268f2abb9cf210599] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42646 - 48543 "HINFO IN 6587318862410380506.3595378422212825610. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03124595s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-171347
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-171347
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=embed-certs-171347
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T08_02_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 08:02:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-171347
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:04:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:04:31 +0000   Thu, 02 Oct 2025 08:02:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:04:31 +0000   Thu, 02 Oct 2025 08:02:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:04:31 +0000   Thu, 02 Oct 2025 08:02:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 08:04:31 +0000   Thu, 02 Oct 2025 08:03:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-171347
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c252fe6c0ca45dba4ee6e57615acf95
	  System UUID:                73993af2-e810-4ff8-b445-81bcd4ff9d18
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-h88d8                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m27s
	  kube-system                 etcd-embed-certs-171347                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m32s
	  kube-system                 kindnet-q6rpr                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m27s
	  kube-system                 kube-apiserver-embed-certs-171347             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-controller-manager-embed-certs-171347    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-jzmxf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-embed-certs-171347             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jdwcd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lph8n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m24s                  kube-proxy       
	  Normal   Starting                 60s                    kube-proxy       
	  Warning  CgroupV1                 2m45s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m44s (x8 over 2m45s)  kubelet          Node embed-certs-171347 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m44s (x8 over 2m45s)  kubelet          Node embed-certs-171347 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m44s (x8 over 2m45s)  kubelet          Node embed-certs-171347 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m33s                  kubelet          Node embed-certs-171347 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m33s                  kubelet          Node embed-certs-171347 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s                  kubelet          Node embed-certs-171347 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m33s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m28s                  node-controller  Node embed-certs-171347 event: Registered Node embed-certs-171347 in Controller
	  Normal   NodeReady                105s                   kubelet          Node embed-certs-171347 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node embed-certs-171347 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node embed-certs-171347 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node embed-certs-171347 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           60s                    node-controller  Node embed-certs-171347 event: Registered Node embed-certs-171347 in Controller
	
	
	==> dmesg <==
	[Oct 2 07:34] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:00] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:03] overlayfs: idmapped layers are currently not supported
	[ +38.953360] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:04] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [19e7d5d7bdca5512898a0c121ad4ff851265a3f8cf6c48dddb1e91460e0e5e12] <==
	{"level":"warn","ts":"2025-10-02T08:03:58.310211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.346756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.382232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.410152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.485479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.509353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.533225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.576147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.591695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.637647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.659838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.697265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.726705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.764796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.802952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.822173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.860111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.902278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.936988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:58.971314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:59.071239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:59.104876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:59.137983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:59.163160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:03:59.219301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54514","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:05:04 up  2:47,  0 user,  load average: 3.55, 3.11, 2.28
	Linux embed-certs-171347 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cef0953b0e3f7851b931442373cc005c869dafa2e5b3791570a189edfeed70be] <==
	I1002 08:04:02.246971       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 08:04:02.299381       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 08:04:02.299633       1 main.go:148] setting mtu 1500 for CNI 
	I1002 08:04:02.299692       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 08:04:02.299754       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T08:04:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 08:04:02.516352       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 08:04:02.516444       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 08:04:02.522745       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 08:04:02.523340       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 08:04:32.516128       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 08:04:32.517061       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 08:04:32.523559       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 08:04:32.523661       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 08:04:33.823922       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 08:04:33.823963       1 metrics.go:72] Registering metrics
	I1002 08:04:33.824036       1 controller.go:711] "Syncing nftables rules"
	I1002 08:04:42.519159       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 08:04:42.519225       1 main.go:301] handling current node
	I1002 08:04:52.523226       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 08:04:52.523267       1 main.go:301] handling current node
	I1002 08:05:02.523354       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 08:05:02.523390       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6f3ca884c1303597bf9de27670995129fac9974f29ec3998eefcb79f460f2608] <==
	I1002 08:04:01.176353       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 08:04:01.176727       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 08:04:01.183415       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 08:04:01.183481       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 08:04:01.183639       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 08:04:01.183689       1 policy_source.go:240] refreshing policies
	I1002 08:04:01.190621       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 08:04:01.195701       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 08:04:01.198910       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 08:04:01.200429       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 08:04:01.201965       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 08:04:01.208450       1 cache.go:39] Caches are synced for autoregister controller
	I1002 08:04:01.210983       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 08:04:01.238417       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:04:01.273029       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 08:04:01.554354       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 08:04:02.464165       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 08:04:02.568838       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 08:04:02.683198       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:04:02.725385       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:04:02.913252       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.139.253"}
	I1002 08:04:02.940978       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.166.183"}
	I1002 08:04:04.860605       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 08:04:04.909992       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 08:04:05.021216       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a3295c18de5cd39930de6a29eafc9bfeb208a5f01b6be0d2f865fafae39a8562] <==
	I1002 08:04:04.492172       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 08:04:04.492182       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 08:04:04.495487       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 08:04:04.496899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 08:04:04.496984       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 08:04:04.496996       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 08:04:04.500403       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 08:04:04.500556       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 08:04:04.500640       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 08:04:04.500686       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 08:04:04.500715       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 08:04:04.502668       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 08:04:04.503885       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 08:04:04.503958       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 08:04:04.504035       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 08:04:04.504273       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 08:04:04.504496       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 08:04:04.504622       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 08:04:04.504702       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 08:04:04.504799       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-171347"
	I1002 08:04:04.504753       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 08:04:04.504904       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 08:04:04.505680       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 08:04:04.508035       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 08:04:04.509257       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [600c1f2a64fc29e5faea39640a8f0c02a79132bb36624fecd3cf771143b4199e] <==
	I1002 08:04:02.877278       1 server_linux.go:53] "Using iptables proxy"
	I1002 08:04:03.172551       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 08:04:03.278339       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 08:04:03.279163       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 08:04:03.279246       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 08:04:03.373856       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 08:04:03.373978       1 server_linux.go:132] "Using iptables Proxier"
	I1002 08:04:03.378572       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 08:04:03.378925       1 server.go:527] "Version info" version="v1.34.1"
	I1002 08:04:03.379585       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:04:03.381024       1 config.go:200] "Starting service config controller"
	I1002 08:04:03.381125       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 08:04:03.381227       1 config.go:106] "Starting endpoint slice config controller"
	I1002 08:04:03.381261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 08:04:03.381310       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 08:04:03.381337       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 08:04:03.382187       1 config.go:309] "Starting node config controller"
	I1002 08:04:03.382247       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 08:04:03.382277       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 08:04:03.481557       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 08:04:03.481662       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 08:04:03.481693       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [85a09c19828ce281864f49326c73b8b58d618d6e28f38bb8d34c435302ffd907] <==
	I1002 08:04:03.000385       1 serving.go:386] Generated self-signed cert in-memory
	I1002 08:04:03.835354       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 08:04:03.835382       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:04:03.840697       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 08:04:03.840741       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 08:04:03.840773       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:04:03.840781       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:04:03.840794       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:04:03.840809       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:04:03.843586       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 08:04:03.843670       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 08:04:03.941424       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:04:03.941488       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 08:04:03.941578       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 08:04:05 embed-certs-171347 kubelet[780]: E1002 08:04:05.107504     780 status_manager.go:1018] "Failed to get status for pod" err="pods \"dashboard-metrics-scraper-6ffb444bf9-jdwcd\" is forbidden: User \"system:node:embed-certs-171347\" cannot get resource \"pods\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'embed-certs-171347' and this object" podUID="b89d4134-6e85-430a-81a6-3a5ba9870788" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd"
	Oct 02 08:04:05 embed-certs-171347 kubelet[780]: I1002 08:04:05.250974     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/732dd77c-3a1a-4f39-be41-fee9623149cf-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-lph8n\" (UID: \"732dd77c-3a1a-4f39-be41-fee9623149cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lph8n"
	Oct 02 08:04:05 embed-certs-171347 kubelet[780]: I1002 08:04:05.251925     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql85w\" (UniqueName: \"kubernetes.io/projected/b89d4134-6e85-430a-81a6-3a5ba9870788-kube-api-access-ql85w\") pod \"dashboard-metrics-scraper-6ffb444bf9-jdwcd\" (UID: \"b89d4134-6e85-430a-81a6-3a5ba9870788\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd"
	Oct 02 08:04:05 embed-certs-171347 kubelet[780]: I1002 08:04:05.251985     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b89d4134-6e85-430a-81a6-3a5ba9870788-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-jdwcd\" (UID: \"b89d4134-6e85-430a-81a6-3a5ba9870788\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd"
	Oct 02 08:04:05 embed-certs-171347 kubelet[780]: I1002 08:04:05.252068     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xztrr\" (UniqueName: \"kubernetes.io/projected/732dd77c-3a1a-4f39-be41-fee9623149cf-kube-api-access-xztrr\") pod \"kubernetes-dashboard-855c9754f9-lph8n\" (UID: \"732dd77c-3a1a-4f39-be41-fee9623149cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lph8n"
	Oct 02 08:04:06 embed-certs-171347 kubelet[780]: W1002 08:04:06.366968     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/411784c5c3f57a4a6657b24ab5b1d1c990428243cb8e8479f142a34b68763faa/crio-1779415ae36a8b81471d578ee6c5c347250601070058debc2f5fe01b8e442532 WatchSource:0}: Error finding container 1779415ae36a8b81471d578ee6c5c347250601070058debc2f5fe01b8e442532: Status 404 returned error can't find the container with id 1779415ae36a8b81471d578ee6c5c347250601070058debc2f5fe01b8e442532
	Oct 02 08:04:11 embed-certs-171347 kubelet[780]: I1002 08:04:11.640710     780 scope.go:117] "RemoveContainer" containerID="8716230077e27fe5f2dea32eef238683d1245cf13dc6fc85c8b8566c7a8da18e"
	Oct 02 08:04:12 embed-certs-171347 kubelet[780]: I1002 08:04:12.640746     780 scope.go:117] "RemoveContainer" containerID="8716230077e27fe5f2dea32eef238683d1245cf13dc6fc85c8b8566c7a8da18e"
	Oct 02 08:04:12 embed-certs-171347 kubelet[780]: I1002 08:04:12.641676     780 scope.go:117] "RemoveContainer" containerID="19445b88f10214b2add34015a72f1734834ae9f0a1098e8d87de0b581f6b7e30"
	Oct 02 08:04:12 embed-certs-171347 kubelet[780]: E1002 08:04:12.641970     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jdwcd_kubernetes-dashboard(b89d4134-6e85-430a-81a6-3a5ba9870788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd" podUID="b89d4134-6e85-430a-81a6-3a5ba9870788"
	Oct 02 08:04:16 embed-certs-171347 kubelet[780]: I1002 08:04:16.299751     780 scope.go:117] "RemoveContainer" containerID="19445b88f10214b2add34015a72f1734834ae9f0a1098e8d87de0b581f6b7e30"
	Oct 02 08:04:16 embed-certs-171347 kubelet[780]: E1002 08:04:16.299996     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jdwcd_kubernetes-dashboard(b89d4134-6e85-430a-81a6-3a5ba9870788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd" podUID="b89d4134-6e85-430a-81a6-3a5ba9870788"
	Oct 02 08:04:27 embed-certs-171347 kubelet[780]: I1002 08:04:27.505355     780 scope.go:117] "RemoveContainer" containerID="19445b88f10214b2add34015a72f1734834ae9f0a1098e8d87de0b581f6b7e30"
	Oct 02 08:04:27 embed-certs-171347 kubelet[780]: I1002 08:04:27.698218     780 scope.go:117] "RemoveContainer" containerID="19445b88f10214b2add34015a72f1734834ae9f0a1098e8d87de0b581f6b7e30"
	Oct 02 08:04:27 embed-certs-171347 kubelet[780]: I1002 08:04:27.698413     780 scope.go:117] "RemoveContainer" containerID="036a049d4d170fc6f94d917d0b70ffeec2e78e29355403e588a6a8c388ef33f1"
	Oct 02 08:04:27 embed-certs-171347 kubelet[780]: E1002 08:04:27.698581     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jdwcd_kubernetes-dashboard(b89d4134-6e85-430a-81a6-3a5ba9870788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd" podUID="b89d4134-6e85-430a-81a6-3a5ba9870788"
	Oct 02 08:04:27 embed-certs-171347 kubelet[780]: I1002 08:04:27.740109     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lph8n" podStartSLOduration=12.097069007 podStartE2EDuration="22.740091531s" podCreationTimestamp="2025-10-02 08:04:05 +0000 UTC" firstStartedPulling="2025-10-02 08:04:06.374726286 +0000 UTC m=+12.081809218" lastFinishedPulling="2025-10-02 08:04:17.01774881 +0000 UTC m=+22.724831742" observedRunningTime="2025-10-02 08:04:17.693239437 +0000 UTC m=+23.400322369" watchObservedRunningTime="2025-10-02 08:04:27.740091531 +0000 UTC m=+33.447174463"
	Oct 02 08:04:33 embed-certs-171347 kubelet[780]: I1002 08:04:33.721121     780 scope.go:117] "RemoveContainer" containerID="fe1a50b0490de1a057bb2439be07b143442b3be835d66e0e05add86a488991b7"
	Oct 02 08:04:36 embed-certs-171347 kubelet[780]: I1002 08:04:36.299537     780 scope.go:117] "RemoveContainer" containerID="036a049d4d170fc6f94d917d0b70ffeec2e78e29355403e588a6a8c388ef33f1"
	Oct 02 08:04:36 embed-certs-171347 kubelet[780]: E1002 08:04:36.299729     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jdwcd_kubernetes-dashboard(b89d4134-6e85-430a-81a6-3a5ba9870788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd" podUID="b89d4134-6e85-430a-81a6-3a5ba9870788"
	Oct 02 08:04:47 embed-certs-171347 kubelet[780]: I1002 08:04:47.507648     780 scope.go:117] "RemoveContainer" containerID="036a049d4d170fc6f94d917d0b70ffeec2e78e29355403e588a6a8c388ef33f1"
	Oct 02 08:04:47 embed-certs-171347 kubelet[780]: E1002 08:04:47.507974     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jdwcd_kubernetes-dashboard(b89d4134-6e85-430a-81a6-3a5ba9870788)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jdwcd" podUID="b89d4134-6e85-430a-81a6-3a5ba9870788"
	Oct 02 08:04:58 embed-certs-171347 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 08:04:58 embed-certs-171347 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 08:04:58 embed-certs-171347 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [85a6c0112fabfd159eed64bcdbc0d532333c469b91bb51b8b81cceaa57497dfa] <==
	2025/10/02 08:04:17 Using namespace: kubernetes-dashboard
	2025/10/02 08:04:17 Using in-cluster config to connect to apiserver
	2025/10/02 08:04:17 Using secret token for csrf signing
	2025/10/02 08:04:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 08:04:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 08:04:17 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 08:04:17 Generating JWE encryption key
	2025/10/02 08:04:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 08:04:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 08:04:18 Initializing JWE encryption key from synchronized object
	2025/10/02 08:04:18 Creating in-cluster Sidecar client
	2025/10/02 08:04:18 Serving insecurely on HTTP port: 9090
	2025/10/02 08:04:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 08:04:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 08:04:17 Starting overwatch
	
	
	==> storage-provisioner [db0b849d228fac61d4e2d6e7be757539fbc17766bd3b2e4bc18d8917aebdbf68] <==
	W1002 08:04:33.805387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:37.260409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:41.521439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:45.120620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:48.175614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:51.198608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:51.208106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:04:51.208344       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 08:04:51.209658       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-171347_31d1bd72-87a6-4577-a779-321f920f8894!
	I1002 08:04:51.210561       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5810a4b1-cc04-4b0a-996b-984738abc721", APIVersion:"v1", ResourceVersion:"688", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-171347_31d1bd72-87a6-4577-a779-321f920f8894 became leader
	W1002 08:04:51.217640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:51.223810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:04:51.310447       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-171347_31d1bd72-87a6-4577-a779-321f920f8894!
	W1002 08:04:53.226239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:53.232940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:55.235974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:55.240133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:57.243329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:57.248400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:59.251868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:04:59.269503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:01.272867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:01.285184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:03.288454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:03.296467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fe1a50b0490de1a057bb2439be07b143442b3be835d66e0e05add86a488991b7] <==
	I1002 08:04:02.802470       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 08:04:32.810763       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-171347 -n embed-certs-171347
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-171347 -n embed-certs-171347: exit status 2 (386.215939ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-171347 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.63s)
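To triage this Pause failure locally, the post-mortem probes recorded above can be rerun by hand. A minimal sketch, assuming the embed-certs-171347 profile has not been deleted yet; the commands are the same ones the helpers above invoke, nothing beyond that is implied:

# API-server status probe (same as helpers_test.go:262 above)
out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-171347 -n embed-certs-171347

# Any pods not in Running phase (same as helpers_test.go:269 above)
kubectl --context embed-certs-171347 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'

An API server still reported as Running after the pause attempt matches the exit status 2 seen above; on its own it does not say whether the pause was never applied or was applied and immediately undone.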

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-009374 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-009374 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (280.451133ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:05:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-009374 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
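The MK_ADDON_ENABLE_PAUSED failure above is the paused-state check shelling out to sudo runc list -f json on the node and hitting "open /run/runc: no such file or directory". A minimal sketch for inspecting the node by hand, assuming the newest-cni-009374 profile is still running; the alternate state directories and the crio config grep are guesses about where CRI-O keeps its runtime state, not facts established by this report:

# Reproduce the exact command the check runs (copied from the stderr above)
out/minikube-linux-arm64 -p newest-cni-009374 ssh -- sudo runc list -f json

# Look for whichever runtime state directory the CRI-O node actually uses (paths are assumptions to verify)
out/minikube-linux-arm64 -p newest-cni-009374 ssh -- "ls -d /run/runc /run/crun /run/containers 2>/dev/null; sudo crio config 2>/dev/null | grep -iE 'runtime_(path|root)'"

If /run/runc genuinely does not exist on this kicbase image, every runc list based check will fail the same way, which would be consistent with the other Pause and EnableAddonWhileActive failures in this run.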
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-009374
helpers_test.go:243: (dbg) docker inspect newest-cni-009374:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5",
	        "Created": "2025-10-02T08:05:13.541866609Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 506136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T08:05:13.604914699Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5/hosts",
	        "LogPath": "/var/lib/docker/containers/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5-json.log",
	        "Name": "/newest-cni-009374",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-009374:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-009374",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5",
	                "LowerDir": "/var/lib/docker/overlay2/0c81039f87749c127db4fdc5061be5e43aead4cee26d5be1d059c6ccd3bfd6e0-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c81039f87749c127db4fdc5061be5e43aead4cee26d5be1d059c6ccd3bfd6e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c81039f87749c127db4fdc5061be5e43aead4cee26d5be1d059c6ccd3bfd6e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c81039f87749c127db4fdc5061be5e43aead4cee26d5be1d059c6ccd3bfd6e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-009374",
	                "Source": "/var/lib/docker/volumes/newest-cni-009374/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-009374",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-009374",
	                "name.minikube.sigs.k8s.io": "newest-cni-009374",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c6140d1223aa56423a597224d7d5f381bd399f5404da7bc614d76cb1f09d42ea",
	            "SandboxKey": "/var/run/docker/netns/c6140d1223aa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-009374": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:6c:e1:f5:4b:0a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "76416bed3e9b57e23ee4e18e21c895059d8b16740e350a7d0407898e1cd7fb9e",
	                    "EndpointID": "d0c6036ad13a7d3961952959066dc4542c4d21c244a583afe88206ee05a7a49c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-009374",
	                        "ccc6360467e3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
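The docker inspect dump above is long; for triage, the fields that matter can be pulled directly with Go templates. A minimal sketch, assuming the newest-cni-009374 container is still up; the field paths are exactly the ones visible in the JSON above:

# Container run state, including whether Docker itself has the container paused
docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-009374

# Published host ports (22/tcp is SSH, 8443/tcp is the API server in the dump above)
docker inspect -f '{{json .NetworkSettings.Ports}}' newest-cni-009374

Note that .State.Paused reflects Docker-level pausing of the kicbase container, which is a different layer from the in-node runc check that the failing addon-enable step exercises.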
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-009374 -n newest-cni-009374
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-009374 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-009374 logs -n 25: (1.155440281s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-356986                                                                                                                                                                                                                     │ old-k8s-version-356986       │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ delete  │ -p old-k8s-version-356986                                                                                                                                                                                                                     │ old-k8s-version-356986       │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:02 UTC │
	│ delete  │ -p cert-expiration-759246                                                                                                                                                                                                                     │ cert-expiration-759246       │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-604182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │                     │
	│ stop    │ -p no-preload-604182 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p no-preload-604182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-171347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │                     │
	│ stop    │ -p embed-certs-171347 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-171347 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:04 UTC │
	│ image   │ no-preload-604182 image list --format=json                                                                                                                                                                                                    │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p no-preload-604182 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p disable-driver-mounts-466206                                                                                                                                                                                                               │ disable-driver-mounts-466206 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ start   │ -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:05 UTC │
	│ image   │ embed-certs-171347 image list --format=json                                                                                                                                                                                                   │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p embed-certs-171347 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ delete  │ -p embed-certs-171347                                                                                                                                                                                                                         │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ delete  │ -p embed-certs-171347                                                                                                                                                                                                                         │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ start   │ -p newest-cni-009374 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ addons  │ enable metrics-server -p newest-cni-009374 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:05:08
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:05:08.022338  505743 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:05:08.022476  505743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:05:08.022486  505743 out.go:374] Setting ErrFile to fd 2...
	I1002 08:05:08.022491  505743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:05:08.022906  505743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:05:08.023470  505743 out.go:368] Setting JSON to false
	I1002 08:05:08.024745  505743 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10059,"bootTime":1759382249,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 08:05:08.024827  505743 start.go:140] virtualization:  
	I1002 08:05:08.029079  505743 out.go:179] * [newest-cni-009374] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:05:08.033560  505743 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:05:08.033730  505743 notify.go:220] Checking for updates...
	I1002 08:05:08.037374  505743 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:05:08.040657  505743 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:05:08.044026  505743 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 08:05:08.047169  505743 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:05:08.050211  505743 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:05:08.053815  505743 config.go:182] Loaded profile config "default-k8s-diff-port-417078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:05:08.053955  505743 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:05:08.076702  505743 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:05:08.076830  505743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:05:08.144683  505743 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:05:08.135374991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:05:08.144798  505743 docker.go:318] overlay module found
	I1002 08:05:08.148207  505743 out.go:179] * Using the docker driver based on user configuration
	I1002 08:05:08.151263  505743 start.go:304] selected driver: docker
	I1002 08:05:08.151285  505743 start.go:924] validating driver "docker" against <nil>
	I1002 08:05:08.151301  505743 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:05:08.152047  505743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:05:08.213097  505743 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:05:08.203956252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:05:08.213252  505743 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1002 08:05:08.213283  505743 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1002 08:05:08.213525  505743 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 08:05:08.216513  505743 out.go:179] * Using Docker driver with root privileges
	I1002 08:05:08.219532  505743 cni.go:84] Creating CNI manager for ""
	I1002 08:05:08.219608  505743 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:05:08.219626  505743 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 08:05:08.219714  505743 start.go:348] cluster config:
	{Name:newest-cni-009374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:05:08.222768  505743 out.go:179] * Starting "newest-cni-009374" primary control-plane node in "newest-cni-009374" cluster
	I1002 08:05:08.225585  505743 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 08:05:08.228623  505743 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 08:05:08.231387  505743 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:05:08.231437  505743 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 08:05:08.231453  505743 cache.go:58] Caching tarball of preloaded images
	I1002 08:05:08.231488  505743 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 08:05:08.231545  505743 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 08:05:08.231556  505743 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 08:05:08.231661  505743 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/config.json ...
	I1002 08:05:08.231678  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/config.json: {Name:mk3b00f84ec9e01170e0b040f918d03f7f43d587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:08.249861  505743 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 08:05:08.249881  505743 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 08:05:08.249908  505743 cache.go:232] Successfully downloaded all kic artifacts
	I1002 08:05:08.249931  505743 start.go:360] acquireMachinesLock for newest-cni-009374: {Name:mkc4d59aea6378cca25c0d5a33fa5c014f2edd31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:05:08.250045  505743 start.go:364] duration metric: took 99.078µs to acquireMachinesLock for "newest-cni-009374"
	I1002 08:05:08.250095  505743 start.go:93] Provisioning new machine with config: &{Name:newest-cni-009374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:05:08.250182  505743 start.go:125] createHost starting for "" (driver="docker")
	W1002 08:05:09.325701  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:11.825504  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	I1002 08:05:08.253494  505743 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 08:05:08.253741  505743 start.go:159] libmachine.API.Create for "newest-cni-009374" (driver="docker")
	I1002 08:05:08.253792  505743 client.go:168] LocalClient.Create starting
	I1002 08:05:08.253879  505743 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem
	I1002 08:05:08.253919  505743 main.go:141] libmachine: Decoding PEM data...
	I1002 08:05:08.253935  505743 main.go:141] libmachine: Parsing certificate...
	I1002 08:05:08.253993  505743 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem
	I1002 08:05:08.254014  505743 main.go:141] libmachine: Decoding PEM data...
	I1002 08:05:08.254028  505743 main.go:141] libmachine: Parsing certificate...
	I1002 08:05:08.254417  505743 cli_runner.go:164] Run: docker network inspect newest-cni-009374 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 08:05:08.270633  505743 cli_runner.go:211] docker network inspect newest-cni-009374 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 08:05:08.270749  505743 network_create.go:284] running [docker network inspect newest-cni-009374] to gather additional debugging logs...
	I1002 08:05:08.270798  505743 cli_runner.go:164] Run: docker network inspect newest-cni-009374
	W1002 08:05:08.288673  505743 cli_runner.go:211] docker network inspect newest-cni-009374 returned with exit code 1
	I1002 08:05:08.288707  505743 network_create.go:287] error running [docker network inspect newest-cni-009374]: docker network inspect newest-cni-009374: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-009374 not found
	I1002 08:05:08.288719  505743 network_create.go:289] output of [docker network inspect newest-cni-009374]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-009374 not found
	
	** /stderr **
	I1002 08:05:08.288825  505743 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:05:08.305408  505743 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-87a294cab4b5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:50:ad:a1:2a:88} reservation:<nil>}
	I1002 08:05:08.305783  505743 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-560172b9232e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:9f:ec:fb:3f:87} reservation:<nil>}
	I1002 08:05:08.305935  505743 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2eae6334e56d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:6a:a0:79:3a:d9} reservation:<nil>}
	I1002 08:05:08.306269  505743 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d1780ea11813 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:84:d7:de:73:b2} reservation:<nil>}
	I1002 08:05:08.306679  505743 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5f90}
	I1002 08:05:08.306700  505743 network_create.go:124] attempt to create docker network newest-cni-009374 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 08:05:08.306768  505743 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-009374 newest-cni-009374
	I1002 08:05:08.366588  505743 network_create.go:108] docker network newest-cni-009374 192.168.85.0/24 created
	I1002 08:05:08.366623  505743 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-009374" container
	I1002 08:05:08.366717  505743 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 08:05:08.384356  505743 cli_runner.go:164] Run: docker volume create newest-cni-009374 --label name.minikube.sigs.k8s.io=newest-cni-009374 --label created_by.minikube.sigs.k8s.io=true
	I1002 08:05:08.403911  505743 oci.go:103] Successfully created a docker volume newest-cni-009374
	I1002 08:05:08.404007  505743 cli_runner.go:164] Run: docker run --rm --name newest-cni-009374-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-009374 --entrypoint /usr/bin/test -v newest-cni-009374:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 08:05:08.975801  505743 oci.go:107] Successfully prepared a docker volume newest-cni-009374
	I1002 08:05:08.975856  505743 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:05:08.975878  505743 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 08:05:08.975948  505743 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-009374:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	W1002 08:05:13.828110  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:16.325173  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	I1002 08:05:13.458680  505743 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-009374:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.482689234s)
	I1002 08:05:13.458711  505743 kic.go:203] duration metric: took 4.482830946s to extract preloaded images to volume ...
	W1002 08:05:13.458855  505743 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 08:05:13.458983  505743 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 08:05:13.514518  505743 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-009374 --name newest-cni-009374 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-009374 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-009374 --network newest-cni-009374 --ip 192.168.85.2 --volume newest-cni-009374:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 08:05:13.841231  505743 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Running}}
	I1002 08:05:13.862914  505743 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Status}}
	I1002 08:05:13.888408  505743 cli_runner.go:164] Run: docker exec newest-cni-009374 stat /var/lib/dpkg/alternatives/iptables
	I1002 08:05:13.956283  505743 oci.go:144] the created container "newest-cni-009374" has a running status.
	I1002 08:05:13.956327  505743 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa...
	I1002 08:05:14.316897  505743 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 08:05:14.357173  505743 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Status}}
	I1002 08:05:14.392058  505743 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 08:05:14.392077  505743 kic_runner.go:114] Args: [docker exec --privileged newest-cni-009374 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 08:05:14.448957  505743 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Status}}
	I1002 08:05:14.467707  505743 machine.go:93] provisionDockerMachine start ...
	I1002 08:05:14.467816  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:14.485889  505743 main.go:141] libmachine: Using SSH client type: native
	I1002 08:05:14.486247  505743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1002 08:05:14.486258  505743 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 08:05:14.486834  505743 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39936->127.0.0.1:33433: read: connection reset by peer
	I1002 08:05:17.626698  505743 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-009374
	
	I1002 08:05:17.626722  505743 ubuntu.go:182] provisioning hostname "newest-cni-009374"
	I1002 08:05:17.626787  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:17.647697  505743 main.go:141] libmachine: Using SSH client type: native
	I1002 08:05:17.648015  505743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1002 08:05:17.648033  505743 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-009374 && echo "newest-cni-009374" | sudo tee /etc/hostname
	I1002 08:05:17.788866  505743 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-009374
	
	I1002 08:05:17.788955  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:17.806775  505743 main.go:141] libmachine: Using SSH client type: native
	I1002 08:05:17.807144  505743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1002 08:05:17.807164  505743 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-009374' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-009374/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-009374' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 08:05:17.939726  505743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 08:05:17.939747  505743 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 08:05:17.939774  505743 ubuntu.go:190] setting up certificates
	I1002 08:05:17.939784  505743 provision.go:84] configureAuth start
	I1002 08:05:17.939841  505743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-009374
	I1002 08:05:17.957531  505743 provision.go:143] copyHostCerts
	I1002 08:05:17.957595  505743 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 08:05:17.957604  505743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 08:05:17.957687  505743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 08:05:17.957787  505743 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 08:05:17.957792  505743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 08:05:17.957818  505743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 08:05:17.957869  505743 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 08:05:17.957874  505743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 08:05:17.957895  505743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 08:05:17.958037  505743 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.newest-cni-009374 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-009374]
	I1002 08:05:18.723709  505743 provision.go:177] copyRemoteCerts
	I1002 08:05:18.723780  505743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 08:05:18.723830  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:18.744220  505743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:05:18.847147  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 08:05:18.864966  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 08:05:18.883591  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 08:05:18.901923  505743 provision.go:87] duration metric: took 962.125906ms to configureAuth
	I1002 08:05:18.901954  505743 ubuntu.go:206] setting minikube options for container-runtime
	I1002 08:05:18.902179  505743 config.go:182] Loaded profile config "newest-cni-009374": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:05:18.902288  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:18.920389  505743 main.go:141] libmachine: Using SSH client type: native
	I1002 08:05:18.920706  505743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1002 08:05:18.920728  505743 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 08:05:19.173677  505743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 08:05:19.173709  505743 machine.go:96] duration metric: took 4.705981163s to provisionDockerMachine
	I1002 08:05:19.173719  505743 client.go:171] duration metric: took 10.919915289s to LocalClient.Create
	I1002 08:05:19.173733  505743 start.go:167] duration metric: took 10.919994018s to libmachine.API.Create "newest-cni-009374"
	I1002 08:05:19.173741  505743 start.go:293] postStartSetup for "newest-cni-009374" (driver="docker")
	I1002 08:05:19.173768  505743 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 08:05:19.173844  505743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 08:05:19.173890  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:19.193028  505743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:05:19.291626  505743 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 08:05:19.295280  505743 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 08:05:19.295313  505743 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 08:05:19.295325  505743 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 08:05:19.295382  505743 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 08:05:19.295464  505743 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 08:05:19.295575  505743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 08:05:19.303473  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:05:19.321128  505743 start.go:296] duration metric: took 147.370977ms for postStartSetup
	I1002 08:05:19.321548  505743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-009374
	I1002 08:05:19.341054  505743 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/config.json ...
	I1002 08:05:19.341351  505743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 08:05:19.341417  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:19.360528  505743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:05:19.456893  505743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 08:05:19.462566  505743 start.go:128] duration metric: took 11.212369181s to createHost
	I1002 08:05:19.462590  505743 start.go:83] releasing machines lock for "newest-cni-009374", held for 11.212535919s
	I1002 08:05:19.462665  505743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-009374
	I1002 08:05:19.479266  505743 ssh_runner.go:195] Run: cat /version.json
	I1002 08:05:19.479304  505743 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 08:05:19.479335  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:19.479376  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:19.499787  505743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:05:19.517617  505743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:05:19.607024  505743 ssh_runner.go:195] Run: systemctl --version
	I1002 08:05:19.706873  505743 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 08:05:19.744472  505743 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 08:05:19.748677  505743 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 08:05:19.748793  505743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 08:05:19.778661  505743 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 08:05:19.778690  505743 start.go:495] detecting cgroup driver to use...
	I1002 08:05:19.778724  505743 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 08:05:19.778777  505743 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 08:05:19.797235  505743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 08:05:19.810783  505743 docker.go:218] disabling cri-docker service (if available) ...
	I1002 08:05:19.810872  505743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 08:05:19.837467  505743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 08:05:19.858168  505743 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 08:05:20.005518  505743 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 08:05:20.138575  505743 docker.go:234] disabling docker service ...
	I1002 08:05:20.138678  505743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 08:05:20.164135  505743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 08:05:20.181704  505743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 08:05:20.302700  505743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 08:05:20.416747  505743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 08:05:20.430215  505743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 08:05:20.444616  505743 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 08:05:20.444685  505743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:05:20.453675  505743 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 08:05:20.453748  505743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:05:20.463428  505743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:05:20.473015  505743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:05:20.482922  505743 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 08:05:20.497688  505743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:05:20.515172  505743 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:05:20.537887  505743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:05:20.547776  505743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 08:05:20.555685  505743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 08:05:20.563485  505743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:05:20.690318  505743 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 08:05:20.835845  505743 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 08:05:20.835932  505743 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 08:05:20.840630  505743 start.go:563] Will wait 60s for crictl version
	I1002 08:05:20.840715  505743 ssh_runner.go:195] Run: which crictl
	I1002 08:05:20.844905  505743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 08:05:20.870316  505743 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 08:05:20.870405  505743 ssh_runner.go:195] Run: crio --version
	I1002 08:05:20.902956  505743 ssh_runner.go:195] Run: crio --version
	I1002 08:05:20.942068  505743 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 08:05:20.945111  505743 cli_runner.go:164] Run: docker network inspect newest-cni-009374 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:05:20.960322  505743 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 08:05:20.964597  505743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:05:20.978087  505743 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1002 08:05:18.326049  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:20.326484  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:22.826846  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	I1002 08:05:20.980895  505743 kubeadm.go:883] updating cluster {Name:newest-cni-009374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 08:05:20.981047  505743 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:05:20.981138  505743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:05:21.021326  505743 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:05:21.021353  505743 crio.go:433] Images already preloaded, skipping extraction
	I1002 08:05:21.021412  505743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:05:21.054008  505743 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:05:21.054041  505743 cache_images.go:85] Images are preloaded, skipping loading
	I1002 08:05:21.054051  505743 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 08:05:21.054159  505743 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-009374 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 08:05:21.054247  505743 ssh_runner.go:195] Run: crio config
	I1002 08:05:21.125212  505743 cni.go:84] Creating CNI manager for ""
	I1002 08:05:21.125241  505743 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:05:21.125255  505743 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1002 08:05:21.125288  505743 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-009374 NodeName:newest-cni-009374 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 08:05:21.125440  505743 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-009374"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 08:05:21.125538  505743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 08:05:21.134696  505743 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 08:05:21.134777  505743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 08:05:21.142936  505743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 08:05:21.156693  505743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 08:05:21.170396  505743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1002 08:05:21.184284  505743 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 08:05:21.188433  505743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:05:21.199075  505743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:05:21.325114  505743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:05:21.342110  505743 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374 for IP: 192.168.85.2
	I1002 08:05:21.342184  505743 certs.go:195] generating shared ca certs ...
	I1002 08:05:21.342215  505743 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:21.342405  505743 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 08:05:21.342483  505743 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 08:05:21.342519  505743 certs.go:257] generating profile certs ...
	I1002 08:05:21.342604  505743 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/client.key
	I1002 08:05:21.342644  505743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/client.crt with IP's: []
	I1002 08:05:21.639036  505743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/client.crt ...
	I1002 08:05:21.639067  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/client.crt: {Name:mkc8bd3fbe68762ffa8e8c2092bda774e13be482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:21.639263  505743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/client.key ...
	I1002 08:05:21.639280  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/client.key: {Name:mk0d1f7e3e55c1b0b7f029711cf8307a1963c5b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:21.639382  505743 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.key.5f9bb80c
	I1002 08:05:21.639401  505743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.crt.5f9bb80c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 08:05:21.915402  505743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.crt.5f9bb80c ...
	I1002 08:05:21.915434  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.crt.5f9bb80c: {Name:mkf10ff2584ff29346ada4f0cb552775bf05892c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:21.915616  505743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.key.5f9bb80c ...
	I1002 08:05:21.915631  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.key.5f9bb80c: {Name:mk5996d7a84cbc156faaa104ec441acb6fe6aede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:21.915709  505743 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.crt.5f9bb80c -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.crt
	I1002 08:05:21.915810  505743 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.key.5f9bb80c -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.key
	I1002 08:05:21.915873  505743 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.key
	I1002 08:05:21.915896  505743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.crt with IP's: []
	I1002 08:05:22.839211  505743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.crt ...
	I1002 08:05:22.839246  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.crt: {Name:mkc3911cd99cf2913815fc32eea94248f3d6f8ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:22.839430  505743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.key ...
	I1002 08:05:22.839446  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.key: {Name:mkba834fec2a23895bbd4f78a8f69fbac09d680d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
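	[editor's note] The certs.go/crypto.go entries above show minikube generating CA-signed profile certificates (client, apiserver, proxy-client) before pushing them to the node. A minimal Go sketch of the same idea using only the standard library; the function name, output file names, and the system:masters organization are illustrative assumptions, not minikube's actual code:

	package certsketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	// writeClientCert signs a client certificate with an already-loaded CA
	// cert/key pair and writes the PEM-encoded cert and key to disk.
	func writeClientCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			// "minikube-user" matches the profile cert name in the log above;
			// the organization is an assumption for illustration only.
			Subject:     pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return err
		}
		certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		if err := os.WriteFile("client.crt", certPEM, 0644); err != nil {
			return err
		}
		return os.WriteFile("client.key", keyPEM, 0600)
	}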
	I1002 08:05:22.839640  505743 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 08:05:22.839683  505743 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 08:05:22.839698  505743 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 08:05:22.839721  505743 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 08:05:22.839747  505743 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 08:05:22.839773  505743 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 08:05:22.839820  505743 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:05:22.840387  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 08:05:22.861613  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 08:05:22.883271  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 08:05:22.911788  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 08:05:22.936138  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 08:05:22.957760  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 08:05:22.982284  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 08:05:23.002867  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	W1002 08:05:25.324891  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:27.328290  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	I1002 08:05:23.027029  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 08:05:23.048353  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 08:05:23.069852  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 08:05:23.089334  505743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 08:05:23.102788  505743 ssh_runner.go:195] Run: openssl version
	I1002 08:05:23.109746  505743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 08:05:23.118425  505743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 08:05:23.122311  505743 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 08:05:23.122438  505743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 08:05:23.164465  505743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 08:05:23.172996  505743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 08:05:23.181444  505743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 08:05:23.185332  505743 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 08:05:23.185419  505743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 08:05:23.227474  505743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 08:05:23.235947  505743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 08:05:23.244731  505743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:05:23.248403  505743 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:05:23.248492  505743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:05:23.289391  505743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 08:05:23.298040  505743 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 08:05:23.301713  505743 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 08:05:23.301795  505743 kubeadm.go:400] StartCluster: {Name:newest-cni-009374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:05:23.301877  505743 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 08:05:23.301955  505743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:05:23.333810  505743 cri.go:89] found id: ""
	I1002 08:05:23.333894  505743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 08:05:23.342252  505743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 08:05:23.350169  505743 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 08:05:23.350258  505743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 08:05:23.358221  505743 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 08:05:23.358244  505743 kubeadm.go:157] found existing configuration files:
	
	I1002 08:05:23.358295  505743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 08:05:23.366575  505743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 08:05:23.366649  505743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 08:05:23.374636  505743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 08:05:23.382842  505743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 08:05:23.382912  505743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 08:05:23.390975  505743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 08:05:23.399074  505743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 08:05:23.399233  505743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 08:05:23.406978  505743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 08:05:23.415366  505743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 08:05:23.415440  505743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 08:05:23.423242  505743 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 08:05:23.491830  505743 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 08:05:23.492079  505743 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 08:05:23.559968  505743 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1002 08:05:29.827341  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:32.326114  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:34.326176  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:36.825669  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:38.825860  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	I1002 08:05:41.343599  501823 node_ready.go:49] node "default-k8s-diff-port-417078" is "Ready"
	I1002 08:05:41.343634  501823 node_ready.go:38] duration metric: took 39.021590865s for node "default-k8s-diff-port-417078" to be "Ready" ...
	I1002 08:05:41.343649  501823 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:05:41.343709  501823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:05:41.365581  501823 api_server.go:72] duration metric: took 41.061846434s to wait for apiserver process to appear ...
	I1002 08:05:41.365610  501823 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:05:41.365630  501823 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1002 08:05:41.374889  501823 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1002 08:05:41.376990  501823 api_server.go:141] control plane version: v1.34.1
	I1002 08:05:41.377021  501823 api_server.go:131] duration metric: took 11.403551ms to wait for apiserver health ...
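	[editor's note] As the api_server.go lines above show, apiserver readiness here is just repeated polling of the /healthz endpoint (https://192.168.76.2:8444/healthz) until it answers 200. A self-contained Go sketch of such a poll loop, with an illustrative timeout; TLS verification is disabled only to keep the example short, whereas a real client should trust the cluster CA:

	package healthsketch

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it answers
	// 200 OK or the deadline passes. URL and timeout are illustrative values.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}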
	I1002 08:05:41.377032  501823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:05:41.381555  501823 system_pods.go:59] 8 kube-system pods found
	I1002 08:05:41.381596  501823 system_pods.go:61] "coredns-66bc5c9577-cscrn" [f16e8634-2bad-477e-8a6a-125d5982309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:05:41.381604  501823 system_pods.go:61] "etcd-default-k8s-diff-port-417078" [42031abb-d4f1-402f-ab56-84febc04510b] Running
	I1002 08:05:41.381621  501823 system_pods.go:61] "kindnet-xvmxj" [8150ddc1-f400-422d-a0a6-3a42c58bec39] Running
	I1002 08:05:41.381627  501823 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-417078" [a873c14b-9486-43dc-ae23-14e8295d0848] Running
	I1002 08:05:41.381632  501823 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-417078" [da19df7e-eaba-494d-8b1b-34d66627a3ef] Running
	I1002 08:05:41.381639  501823 system_pods.go:61] "kube-proxy-g6hc4" [63b17498-7dca-45ba-81a8-4aa33302a8df] Running
	I1002 08:05:41.381644  501823 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-417078" [ddfd8f2d-83ca-4e3c-98b3-c3a4ea103ee3] Running
	I1002 08:05:41.381656  501823 system_pods.go:61] "storage-provisioner" [12bac59c-b28d-4401-8b03-fb5742196ee4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:05:41.381662  501823 system_pods.go:74] duration metric: took 4.625766ms to wait for pod list to return data ...
	I1002 08:05:41.381686  501823 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:05:41.384536  501823 default_sa.go:45] found service account: "default"
	I1002 08:05:41.384566  501823 default_sa.go:55] duration metric: took 2.869984ms for default service account to be created ...
	I1002 08:05:41.384588  501823 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 08:05:41.388279  501823 system_pods.go:86] 8 kube-system pods found
	I1002 08:05:41.388326  501823 system_pods.go:89] "coredns-66bc5c9577-cscrn" [f16e8634-2bad-477e-8a6a-125d5982309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:05:41.388333  501823 system_pods.go:89] "etcd-default-k8s-diff-port-417078" [42031abb-d4f1-402f-ab56-84febc04510b] Running
	I1002 08:05:41.388340  501823 system_pods.go:89] "kindnet-xvmxj" [8150ddc1-f400-422d-a0a6-3a42c58bec39] Running
	I1002 08:05:41.388345  501823 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417078" [a873c14b-9486-43dc-ae23-14e8295d0848] Running
	I1002 08:05:41.388349  501823 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417078" [da19df7e-eaba-494d-8b1b-34d66627a3ef] Running
	I1002 08:05:41.388354  501823 system_pods.go:89] "kube-proxy-g6hc4" [63b17498-7dca-45ba-81a8-4aa33302a8df] Running
	I1002 08:05:41.388358  501823 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417078" [ddfd8f2d-83ca-4e3c-98b3-c3a4ea103ee3] Running
	I1002 08:05:41.388372  501823 system_pods.go:89] "storage-provisioner" [12bac59c-b28d-4401-8b03-fb5742196ee4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:05:41.388403  501823 retry.go:31] will retry after 289.517334ms: missing components: kube-dns
	I1002 08:05:41.682783  501823 system_pods.go:86] 8 kube-system pods found
	I1002 08:05:41.682832  501823 system_pods.go:89] "coredns-66bc5c9577-cscrn" [f16e8634-2bad-477e-8a6a-125d5982309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:05:41.682841  501823 system_pods.go:89] "etcd-default-k8s-diff-port-417078" [42031abb-d4f1-402f-ab56-84febc04510b] Running
	I1002 08:05:41.682848  501823 system_pods.go:89] "kindnet-xvmxj" [8150ddc1-f400-422d-a0a6-3a42c58bec39] Running
	I1002 08:05:41.682853  501823 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417078" [a873c14b-9486-43dc-ae23-14e8295d0848] Running
	I1002 08:05:41.682859  501823 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417078" [da19df7e-eaba-494d-8b1b-34d66627a3ef] Running
	I1002 08:05:41.682867  501823 system_pods.go:89] "kube-proxy-g6hc4" [63b17498-7dca-45ba-81a8-4aa33302a8df] Running
	I1002 08:05:41.682871  501823 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417078" [ddfd8f2d-83ca-4e3c-98b3-c3a4ea103ee3] Running
	I1002 08:05:41.682877  501823 system_pods.go:89] "storage-provisioner" [12bac59c-b28d-4401-8b03-fb5742196ee4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:05:41.682899  501823 retry.go:31] will retry after 324.416042ms: missing components: kube-dns
	I1002 08:05:42.021963  501823 system_pods.go:86] 8 kube-system pods found
	I1002 08:05:42.022003  501823 system_pods.go:89] "coredns-66bc5c9577-cscrn" [f16e8634-2bad-477e-8a6a-125d5982309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:05:42.022011  501823 system_pods.go:89] "etcd-default-k8s-diff-port-417078" [42031abb-d4f1-402f-ab56-84febc04510b] Running
	I1002 08:05:42.022018  501823 system_pods.go:89] "kindnet-xvmxj" [8150ddc1-f400-422d-a0a6-3a42c58bec39] Running
	I1002 08:05:42.022023  501823 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417078" [a873c14b-9486-43dc-ae23-14e8295d0848] Running
	I1002 08:05:42.022028  501823 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417078" [da19df7e-eaba-494d-8b1b-34d66627a3ef] Running
	I1002 08:05:42.022032  501823 system_pods.go:89] "kube-proxy-g6hc4" [63b17498-7dca-45ba-81a8-4aa33302a8df] Running
	I1002 08:05:42.022037  501823 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417078" [ddfd8f2d-83ca-4e3c-98b3-c3a4ea103ee3] Running
	I1002 08:05:42.022043  501823 system_pods.go:89] "storage-provisioner" [12bac59c-b28d-4401-8b03-fb5742196ee4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:05:42.022092  501823 retry.go:31] will retry after 363.460211ms: missing components: kube-dns
	I1002 08:05:42.390427  501823 system_pods.go:86] 8 kube-system pods found
	I1002 08:05:42.390504  501823 system_pods.go:89] "coredns-66bc5c9577-cscrn" [f16e8634-2bad-477e-8a6a-125d5982309c] Running
	I1002 08:05:42.390533  501823 system_pods.go:89] "etcd-default-k8s-diff-port-417078" [42031abb-d4f1-402f-ab56-84febc04510b] Running
	I1002 08:05:42.390574  501823 system_pods.go:89] "kindnet-xvmxj" [8150ddc1-f400-422d-a0a6-3a42c58bec39] Running
	I1002 08:05:42.390599  501823 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417078" [a873c14b-9486-43dc-ae23-14e8295d0848] Running
	I1002 08:05:42.390624  501823 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417078" [da19df7e-eaba-494d-8b1b-34d66627a3ef] Running
	I1002 08:05:42.390648  501823 system_pods.go:89] "kube-proxy-g6hc4" [63b17498-7dca-45ba-81a8-4aa33302a8df] Running
	I1002 08:05:42.390679  501823 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417078" [ddfd8f2d-83ca-4e3c-98b3-c3a4ea103ee3] Running
	I1002 08:05:42.390706  501823 system_pods.go:89] "storage-provisioner" [12bac59c-b28d-4401-8b03-fb5742196ee4] Running
	I1002 08:05:42.390730  501823 system_pods.go:126] duration metric: took 1.006135523s to wait for k8s-apps to be running ...
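	[editor's note] The system_pods.go retries above keep listing kube-system pods until every component, including kube-dns/coredns and the storage-provisioner, reports Running. A hedged client-go sketch of that wait, assuming a kubeconfig path is available; this is not minikube's implementation:

	package podsketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForKubeSystem lists kube-system pods and retries until all are
	// Running. Kubeconfig path, interval, and timeout are illustrative.
	func waitForKubeSystem(kubeconfig string, timeout time.Duration) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
			if err == nil {
				ready := len(pods.Items) > 0
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						ready = false // e.g. coredns still Pending, as in the log above
						break
					}
				}
				if ready {
					return nil
				}
			}
			time.Sleep(300 * time.Millisecond)
		}
		return fmt.Errorf("kube-system pods not all Running within %s", timeout)
	}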
	I1002 08:05:42.390753  501823 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 08:05:42.390838  501823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:05:42.405328  501823 system_svc.go:56] duration metric: took 14.566379ms WaitForService to wait for kubelet
	I1002 08:05:42.405407  501823 kubeadm.go:586] duration metric: took 42.101677399s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:05:42.405443  501823 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:05:42.409304  501823 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:05:42.409385  501823 node_conditions.go:123] node cpu capacity is 2
	I1002 08:05:42.409413  501823 node_conditions.go:105] duration metric: took 3.949878ms to run NodePressure ...
	I1002 08:05:42.409459  501823 start.go:241] waiting for startup goroutines ...
	I1002 08:05:42.409485  501823 start.go:246] waiting for cluster config update ...
	I1002 08:05:42.409513  501823 start.go:255] writing updated cluster config ...
	I1002 08:05:42.409885  501823 ssh_runner.go:195] Run: rm -f paused
	I1002 08:05:42.414926  501823 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:05:42.418640  501823 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cscrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:42.427726  501823 pod_ready.go:94] pod "coredns-66bc5c9577-cscrn" is "Ready"
	I1002 08:05:42.427751  501823 pod_ready.go:86] duration metric: took 9.081019ms for pod "coredns-66bc5c9577-cscrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:42.430607  501823 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:42.439177  501823 pod_ready.go:94] pod "etcd-default-k8s-diff-port-417078" is "Ready"
	I1002 08:05:42.439207  501823 pod_ready.go:86] duration metric: took 8.573938ms for pod "etcd-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:42.444672  501823 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:42.454840  501823 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-417078" is "Ready"
	I1002 08:05:42.454866  501823 pod_ready.go:86] duration metric: took 10.170293ms for pod "kube-apiserver-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:42.490995  501823 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:42.818511  501823 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-417078" is "Ready"
	I1002 08:05:42.818540  501823 pod_ready.go:86] duration metric: took 327.461807ms for pod "kube-controller-manager-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:44.068293  505743 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 08:05:44.068459  505743 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 08:05:44.068605  505743 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 08:05:44.068686  505743 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 08:05:44.068740  505743 kubeadm.go:318] OS: Linux
	I1002 08:05:44.068812  505743 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 08:05:44.068867  505743 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 08:05:44.068921  505743 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 08:05:44.068975  505743 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 08:05:44.069038  505743 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 08:05:44.069089  505743 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 08:05:44.069139  505743 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 08:05:44.069205  505743 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 08:05:44.069275  505743 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 08:05:44.069368  505743 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 08:05:44.069490  505743 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 08:05:44.069591  505743 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 08:05:44.069660  505743 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 08:05:44.072822  505743 out.go:252]   - Generating certificates and keys ...
	I1002 08:05:44.072941  505743 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 08:05:44.073015  505743 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 08:05:44.073090  505743 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 08:05:44.073154  505743 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 08:05:44.073221  505743 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 08:05:44.073282  505743 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 08:05:44.073343  505743 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 08:05:44.073473  505743 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-009374] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 08:05:44.073531  505743 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 08:05:44.073656  505743 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-009374] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 08:05:44.073777  505743 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 08:05:44.073869  505743 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 08:05:44.073942  505743 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 08:05:44.074037  505743 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 08:05:44.074125  505743 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 08:05:44.074206  505743 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 08:05:44.074291  505743 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 08:05:44.074384  505743 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 08:05:44.074476  505743 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 08:05:44.074568  505743 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 08:05:44.074640  505743 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 08:05:44.077622  505743 out.go:252]   - Booting up control plane ...
	I1002 08:05:44.077739  505743 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 08:05:44.077841  505743 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 08:05:44.077916  505743 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 08:05:44.078028  505743 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 08:05:44.078139  505743 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 08:05:44.078267  505743 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 08:05:44.078364  505743 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 08:05:44.078411  505743 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 08:05:44.078556  505743 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 08:05:44.078680  505743 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 08:05:44.078746  505743 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501717371s
	I1002 08:05:44.078844  505743 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 08:05:44.078931  505743 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 08:05:44.079028  505743 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 08:05:44.079152  505743 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 08:05:44.079234  505743 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.115324492s
	I1002 08:05:44.079303  505743 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.85277951s
	I1002 08:05:44.079377  505743 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002546178s
	I1002 08:05:44.079497  505743 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 08:05:44.079631  505743 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 08:05:44.079719  505743 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 08:05:44.079920  505743 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-009374 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 08:05:44.079981  505743 kubeadm.go:318] [bootstrap-token] Using token: tre844.97ebvftte9n7mk7q
	I1002 08:05:44.083064  505743 out.go:252]   - Configuring RBAC rules ...
	I1002 08:05:44.083221  505743 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 08:05:44.083310  505743 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 08:05:44.083460  505743 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 08:05:44.083605  505743 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 08:05:44.083731  505743 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 08:05:44.083824  505743 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 08:05:44.083957  505743 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 08:05:44.084010  505743 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 08:05:44.084063  505743 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 08:05:44.084070  505743 kubeadm.go:318] 
	I1002 08:05:44.084147  505743 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 08:05:44.084158  505743 kubeadm.go:318] 
	I1002 08:05:44.084238  505743 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 08:05:44.084246  505743 kubeadm.go:318] 
	I1002 08:05:44.084273  505743 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 08:05:44.084339  505743 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 08:05:44.084399  505743 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 08:05:44.084412  505743 kubeadm.go:318] 
	I1002 08:05:44.084469  505743 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 08:05:44.084477  505743 kubeadm.go:318] 
	I1002 08:05:44.084527  505743 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 08:05:44.084535  505743 kubeadm.go:318] 
	I1002 08:05:44.084591  505743 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 08:05:44.084672  505743 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 08:05:44.084747  505743 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 08:05:44.084755  505743 kubeadm.go:318] 
	I1002 08:05:44.084844  505743 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 08:05:44.084929  505743 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 08:05:44.084937  505743 kubeadm.go:318] 
	I1002 08:05:44.085026  505743 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token tre844.97ebvftte9n7mk7q \
	I1002 08:05:44.085137  505743 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf \
	I1002 08:05:44.085164  505743 kubeadm.go:318] 	--control-plane 
	I1002 08:05:44.085171  505743 kubeadm.go:318] 
	I1002 08:05:44.085261  505743 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 08:05:44.085269  505743 kubeadm.go:318] 
	I1002 08:05:44.085355  505743 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token tre844.97ebvftte9n7mk7q \
	I1002 08:05:44.085479  505743 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf 
	I1002 08:05:44.085491  505743 cni.go:84] Creating CNI manager for ""
	I1002 08:05:44.085498  505743 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:05:44.088521  505743 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 08:05:43.020395  501823 pod_ready.go:83] waiting for pod "kube-proxy-g6hc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:43.420222  501823 pod_ready.go:94] pod "kube-proxy-g6hc4" is "Ready"
	I1002 08:05:43.420263  501823 pod_ready.go:86] duration metric: took 399.783731ms for pod "kube-proxy-g6hc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:43.620589  501823 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:44.019358  501823 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-417078" is "Ready"
	I1002 08:05:44.019384  501823 pod_ready.go:86] duration metric: took 398.769822ms for pod "kube-scheduler-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:44.019399  501823 pod_ready.go:40] duration metric: took 1.604437667s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:05:44.107126  501823 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 08:05:44.110302  501823 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-417078" cluster and "default" namespace by default
	I1002 08:05:44.091480  505743 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 08:05:44.096760  505743 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 08:05:44.096783  505743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 08:05:44.120326  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
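	[editor's note] The two steps above, copying the generated CNI manifest to /var/tmp/minikube/cni.yaml and applying it with an explicit kubeconfig, can be sketched in Go roughly as follows; the paths and function name are placeholders and error handling is simplified:

	package cnisketch

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyManifest writes a manifest to disk and applies it with kubectl
	// against an explicit kubeconfig, mirroring the scp + apply step above.
	func applyManifest(kubeconfig, path string, manifest []byte) error {
		if err := os.WriteFile(path, manifest, 0644); err != nil {
			return err
		}
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "-f", path)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		return nil
	}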
	I1002 08:05:44.708936  505743 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 08:05:44.709082  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:44.709162  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-009374 minikube.k8s.io/updated_at=2025_10_02T08_05_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=newest-cni-009374 minikube.k8s.io/primary=true
	I1002 08:05:44.917353  505743 ops.go:34] apiserver oom_adj: -16
	I1002 08:05:44.917468  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:45.417763  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:45.917804  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:46.417700  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:46.918017  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:47.418059  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:47.917591  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:48.418153  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:48.554171  505743 kubeadm.go:1113] duration metric: took 3.845142219s to wait for elevateKubeSystemPrivileges
	I1002 08:05:48.554197  505743 kubeadm.go:402] duration metric: took 25.252431982s to StartCluster
	I1002 08:05:48.554214  505743 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:48.554274  505743 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:05:48.555268  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:48.555504  505743 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:05:48.555588  505743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 08:05:48.555852  505743 config.go:182] Loaded profile config "newest-cni-009374": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:05:48.555891  505743 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 08:05:48.555949  505743 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-009374"
	I1002 08:05:48.555963  505743 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-009374"
	I1002 08:05:48.555983  505743 host.go:66] Checking if "newest-cni-009374" exists ...
	I1002 08:05:48.556680  505743 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Status}}
	I1002 08:05:48.557035  505743 addons.go:69] Setting default-storageclass=true in profile "newest-cni-009374"
	I1002 08:05:48.557053  505743 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-009374"
	I1002 08:05:48.557344  505743 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Status}}
	I1002 08:05:48.560778  505743 out.go:179] * Verifying Kubernetes components...
	I1002 08:05:48.567205  505743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:05:48.592829  505743 addons.go:238] Setting addon default-storageclass=true in "newest-cni-009374"
	I1002 08:05:48.592870  505743 host.go:66] Checking if "newest-cni-009374" exists ...
	I1002 08:05:48.593302  505743 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Status}}
	I1002 08:05:48.603239  505743 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 08:05:48.613074  505743 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:05:48.613100  505743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 08:05:48.613177  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:48.637067  505743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:05:48.643229  505743 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 08:05:48.643251  505743 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 08:05:48.643325  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:48.675134  505743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:05:48.992074  505743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:05:49.070131  505743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 08:05:49.070340  505743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:05:49.125005  505743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:05:49.836295  505743 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1002 08:05:49.839577  505743 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:05:49.839669  505743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:05:49.859202  505743 api_server.go:72] duration metric: took 1.303671575s to wait for apiserver process to appear ...
	I1002 08:05:49.859278  505743 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:05:49.859310  505743 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 08:05:49.888485  505743 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 08:05:49.892848  505743 api_server.go:141] control plane version: v1.34.1
	I1002 08:05:49.892875  505743 api_server.go:131] duration metric: took 33.574415ms to wait for apiserver health ...
	I1002 08:05:49.892884  505743 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:05:49.895481  505743 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 08:05:49.897198  505743 system_pods.go:59] 9 kube-system pods found
	I1002 08:05:49.897232  505743 system_pods.go:61] "coredns-66bc5c9577-p2j8l" [a810de8d-b66f-404e-8b14-911266df5272] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 08:05:49.897241  505743 system_pods.go:61] "coredns-66bc5c9577-vfgvv" [2ee2a4e0-4f16-4a78-b0ab-8ec1b8e98193] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 08:05:49.897249  505743 system_pods.go:61] "etcd-newest-cni-009374" [cabdca96-8777-4057-9e06-1781a4bca780] Running
	I1002 08:05:49.897253  505743 system_pods.go:61] "kindnet-f45p7" [c9cf92b3-8ccb-4487-b783-29df2834d679] Running
	I1002 08:05:49.897266  505743 system_pods.go:61] "kube-apiserver-newest-cni-009374" [986bf8bd-e659-4a96-9fa6-55f2e838b6dd] Running
	I1002 08:05:49.897274  505743 system_pods.go:61] "kube-controller-manager-newest-cni-009374" [b41b9bc3-59aa-4596-9d21-207dfe86cf1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:05:49.897278  505743 system_pods.go:61] "kube-proxy-qsv24" [db609c90-476d-450d-a43d-0600b893f712] Running
	I1002 08:05:49.897284  505743 system_pods.go:61] "kube-scheduler-newest-cni-009374" [5e2e0730-38ef-4779-a6a6-0fe4a374388f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:05:49.897288  505743 system_pods.go:61] "storage-provisioner" [187ddc8e-cf7d-471a-b913-c757e198b82a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 08:05:49.897295  505743 system_pods.go:74] duration metric: took 4.405719ms to wait for pod list to return data ...
	I1002 08:05:49.897303  505743 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:05:49.899191  505743 addons.go:514] duration metric: took 1.343283329s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 08:05:49.900809  505743 default_sa.go:45] found service account: "default"
	I1002 08:05:49.900836  505743 default_sa.go:55] duration metric: took 3.526268ms for default service account to be created ...
	I1002 08:05:49.900849  505743 kubeadm.go:586] duration metric: took 1.345321847s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 08:05:49.900865  505743 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:05:49.907440  505743 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:05:49.907472  505743 node_conditions.go:123] node cpu capacity is 2
	I1002 08:05:49.907486  505743 node_conditions.go:105] duration metric: took 6.61529ms to run NodePressure ...
	I1002 08:05:49.907498  505743 start.go:241] waiting for startup goroutines ...
	I1002 08:05:50.340372  505743 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-009374" context rescaled to 1 replicas
	I1002 08:05:50.340415  505743 start.go:246] waiting for cluster config update ...
	I1002 08:05:50.340428  505743 start.go:255] writing updated cluster config ...
	I1002 08:05:50.340733  505743 ssh_runner.go:195] Run: rm -f paused
	I1002 08:05:50.398894  505743 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 08:05:50.403481  505743 out.go:179] * Done! kubectl is now configured to use "newest-cni-009374" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.704995111Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.711060619Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=04ccce42-3b17-4044-8f53-a7e62b935fbb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.718432511Z" level=info msg="Running pod sandbox: kube-system/kindnet-f45p7/POD" id=c3bacd4c-6ba5-4c13-a5a6-ccb02d8ffcdb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.718749683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.732246357Z" level=info msg="Ran pod sandbox d527839808be7374cae691c63707eb13a135396ac90c024daa3350f335eac5e3 with infra container: kube-system/kube-proxy-qsv24/POD" id=04ccce42-3b17-4044-8f53-a7e62b935fbb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.739248104Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=bf9470ce-71ed-4f10-b8b2-ae5b92a4c1dc name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.742537562Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=488b0868-7b7d-41da-b6b7-774f37e58de4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.746534415Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c3bacd4c-6ba5-4c13-a5a6-ccb02d8ffcdb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.74894325Z" level=info msg="Creating container: kube-system/kube-proxy-qsv24/kube-proxy" id=d9ff07d4-ba39-443f-94de-382b4bed70d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.751356121Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.773082049Z" level=info msg="Ran pod sandbox 0eeba20d6dfe2023312f5d920bc98fc3a5bb114a7a32842e2773a6bc593c158e with infra container: kube-system/kindnet-f45p7/POD" id=c3bacd4c-6ba5-4c13-a5a6-ccb02d8ffcdb name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.776454675Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f4a65d12-de8a-4097-a3ec-0f63062cee07 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.785348633Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4fac1dfc-dbec-4480-9d20-f0419676bb04 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.79066119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.791577277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.803636408Z" level=info msg="Creating container: kube-system/kindnet-f45p7/kindnet-cni" id=1b9882b1-05f8-48c0-ab1c-7d3857c10fe7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.803927586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.832966361Z" level=info msg="Created container 1a60b056f40e21dc1d8365a4a34a544b42b3d8fb074a599a3068edf6c6df773a: kube-system/kube-proxy-qsv24/kube-proxy" id=d9ff07d4-ba39-443f-94de-382b4bed70d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.847521761Z" level=info msg="Starting container: 1a60b056f40e21dc1d8365a4a34a544b42b3d8fb074a599a3068edf6c6df773a" id=47e2b933-0502-4eff-8e8c-6b022bc7a9e1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.867305771Z" level=info msg="Started container" PID=1466 containerID=1a60b056f40e21dc1d8365a4a34a544b42b3d8fb074a599a3068edf6c6df773a description=kube-system/kube-proxy-qsv24/kube-proxy id=47e2b933-0502-4eff-8e8c-6b022bc7a9e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d527839808be7374cae691c63707eb13a135396ac90c024daa3350f335eac5e3
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.869971281Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.871899644Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.9192882Z" level=info msg="Created container fe5a704f71b12d9379b119da4d8e1010bb042eb69c972a0923e2e45d5fc4835b: kube-system/kindnet-f45p7/kindnet-cni" id=1b9882b1-05f8-48c0-ab1c-7d3857c10fe7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.920279611Z" level=info msg="Starting container: fe5a704f71b12d9379b119da4d8e1010bb042eb69c972a0923e2e45d5fc4835b" id=1d180266-67ee-4ef8-bf73-84d35954b394 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:05:48 newest-cni-009374 crio[839]: time="2025-10-02T08:05:48.93159997Z" level=info msg="Started container" PID=1484 containerID=fe5a704f71b12d9379b119da4d8e1010bb042eb69c972a0923e2e45d5fc4835b description=kube-system/kindnet-f45p7/kindnet-cni id=1d180266-67ee-4ef8-bf73-84d35954b394 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0eeba20d6dfe2023312f5d920bc98fc3a5bb114a7a32842e2773a6bc593c158e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	fe5a704f71b12       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   0eeba20d6dfe2       kindnet-f45p7                               kube-system
	1a60b056f40e2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   d527839808be7       kube-proxy-qsv24                            kube-system
	bea7c8dbd8fc0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   22a6fcfa591e1       kube-controller-manager-newest-cni-009374   kube-system
	cc8b680a2c48f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   a65e6baba42f9       kube-scheduler-newest-cni-009374            kube-system
	3b56439018209       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   679b9a31ff811       etcd-newest-cni-009374                      kube-system
	4f8144dc8b8b6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   bc80eb72d0bc4       kube-apiserver-newest-cni-009374            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-009374
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-009374
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=newest-cni-009374
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T08_05_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 08:05:40 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-009374
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:05:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:05:43 +0000   Thu, 02 Oct 2025 08:05:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:05:43 +0000   Thu, 02 Oct 2025 08:05:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:05:43 +0000   Thu, 02 Oct 2025 08:05:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 02 Oct 2025 08:05:43 +0000   Thu, 02 Oct 2025 08:05:36 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-009374
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 9e5063530f7440e8939f5f6a1aa7b314
	  System UUID:                ee2e55db-e4b5-4d38-b86d-d81369f0c72d
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-009374                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-f45p7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-009374             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-009374    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-qsv24                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-009374             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 2s    kube-proxy       
	  Normal   Starting                 8s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s    kubelet          Node newest-cni-009374 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-009374 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s    kubelet          Node newest-cni-009374 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s    node-controller  Node newest-cni-009374 event: Registered Node newest-cni-009374 in Controller
	
	
	==> dmesg <==
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:00] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:03] overlayfs: idmapped layers are currently not supported
	[ +38.953360] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:05] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3b564390182094e58e7b5534a343330441e6c53a393455f93c69fe94517032c8] <==
	{"level":"warn","ts":"2025-10-02T08:05:39.056329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.070366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.104131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.123955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.138376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.162648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.174066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.193989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.217461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.232358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.250613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.268167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.297097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.309143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.326315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.344050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.366853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.397953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.417331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.431302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.448342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.485162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.510001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.515139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:05:39.619185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57372","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:05:51 up  2:48,  0 user,  load average: 3.16, 3.04, 2.29
	Linux newest-cni-009374 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fe5a704f71b12d9379b119da4d8e1010bb042eb69c972a0923e2e45d5fc4835b] <==
	I1002 08:05:49.108723       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 08:05:49.108949       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 08:05:49.109063       1 main.go:148] setting mtu 1500 for CNI 
	I1002 08:05:49.109083       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 08:05:49.109096       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T08:05:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 08:05:49.312123       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 08:05:49.312180       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 08:05:49.312189       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 08:05:49.312504       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [4f8144dc8b8b66150f5f0874cd1e29dbb08a239d858930597ea39521d835aae6] <==
	I1002 08:05:40.518687       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1002 08:05:40.519188       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 08:05:40.540447       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:05:40.540580       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1002 08:05:40.548890       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 08:05:40.553351       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 08:05:40.555725       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:05:40.555862       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 08:05:41.218922       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 08:05:41.224240       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 08:05:41.224267       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:05:42.250027       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:05:42.342764       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:05:42.464872       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 08:05:42.534763       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 08:05:42.548850       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1002 08:05:42.550173       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 08:05:42.557399       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 08:05:43.501276       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 08:05:43.534494       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 08:05:43.550869       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 08:05:48.365809       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1002 08:05:48.472819       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:05:48.480418       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:05:48.596956       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [bea7c8dbd8fc055add7f227365cb39bcbff07208061a20a1fd77b425558a8876] <==
	I1002 08:05:47.490028       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 08:05:47.492193       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 08:05:47.507404       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 08:05:47.509709       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 08:05:47.509791       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 08:05:47.509878       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 08:05:47.509965       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-009374"
	I1002 08:05:47.510020       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 08:05:47.510604       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 08:05:47.510810       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 08:05:47.513276       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 08:05:47.513352       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 08:05:47.515195       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 08:05:47.516569       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 08:05:47.517390       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:05:47.517426       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 08:05:47.517455       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 08:05:47.518113       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 08:05:47.521178       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 08:05:47.521242       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 08:05:47.521264       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 08:05:47.521269       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 08:05:47.521275       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 08:05:47.536346       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-009374" podCIDRs=["10.42.0.0/24"]
	I1002 08:05:47.538639       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [1a60b056f40e21dc1d8365a4a34a544b42b3d8fb074a599a3068edf6c6df773a] <==
	I1002 08:05:49.006244       1 server_linux.go:53] "Using iptables proxy"
	I1002 08:05:49.100718       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 08:05:49.201512       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 08:05:49.201558       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 08:05:49.201646       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 08:05:49.243275       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 08:05:49.243334       1 server_linux.go:132] "Using iptables Proxier"
	I1002 08:05:49.275854       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 08:05:49.276303       1 server.go:527] "Version info" version="v1.34.1"
	I1002 08:05:49.276319       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:05:49.290676       1 config.go:106] "Starting endpoint slice config controller"
	I1002 08:05:49.299357       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 08:05:49.299742       1 config.go:200] "Starting service config controller"
	I1002 08:05:49.299750       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 08:05:49.292035       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 08:05:49.299766       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 08:05:49.292736       1 config.go:309] "Starting node config controller"
	I1002 08:05:49.299777       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 08:05:49.299782       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 08:05:49.399531       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 08:05:49.400780       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 08:05:49.400858       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [cc8b680a2c48f82a04fe2fa65c6fa22e49b9f6974f1aa721a4eda2e09508ae0e] <==
	E1002 08:05:40.496118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 08:05:40.496170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 08:05:40.506393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 08:05:40.506469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 08:05:40.506522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 08:05:40.506657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 08:05:40.506725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 08:05:40.506770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 08:05:40.506880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 08:05:40.506921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 08:05:40.506964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 08:05:40.507169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 08:05:41.362713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 08:05:41.375612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 08:05:41.390600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 08:05:41.394186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 08:05:41.438773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 08:05:41.444356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 08:05:41.465982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 08:05:41.519029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 08:05:41.541876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 08:05:41.618470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 08:05:41.798490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 08:05:41.846204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1002 08:05:44.287941       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 08:05:44 newest-cni-009374 kubelet[1324]: I1002 08:05:44.414670    1324 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 08:05:44 newest-cni-009374 kubelet[1324]: I1002 08:05:44.539926    1324 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-009374"
	Oct 02 08:05:44 newest-cni-009374 kubelet[1324]: I1002 08:05:44.540145    1324 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-009374"
	Oct 02 08:05:44 newest-cni-009374 kubelet[1324]: I1002 08:05:44.540335    1324 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-009374"
	Oct 02 08:05:44 newest-cni-009374 kubelet[1324]: E1002 08:05:44.567517    1324 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-009374\" already exists" pod="kube-system/etcd-newest-cni-009374"
	Oct 02 08:05:44 newest-cni-009374 kubelet[1324]: E1002 08:05:44.567924    1324 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-009374\" already exists" pod="kube-system/kube-apiserver-newest-cni-009374"
	Oct 02 08:05:44 newest-cni-009374 kubelet[1324]: E1002 08:05:44.568167    1324 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-009374\" already exists" pod="kube-system/kube-scheduler-newest-cni-009374"
	Oct 02 08:05:44 newest-cni-009374 kubelet[1324]: I1002 08:05:44.599362    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-009374" podStartSLOduration=2.59934184 podStartE2EDuration="2.59934184s" podCreationTimestamp="2025-10-02 08:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:05:44.585532604 +0000 UTC m=+1.283689178" watchObservedRunningTime="2025-10-02 08:05:44.59934184 +0000 UTC m=+1.297498406"
	Oct 02 08:05:44 newest-cni-009374 kubelet[1324]: I1002 08:05:44.621731    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-009374" podStartSLOduration=2.621712771 podStartE2EDuration="2.621712771s" podCreationTimestamp="2025-10-02 08:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:05:44.6005115 +0000 UTC m=+1.298668074" watchObservedRunningTime="2025-10-02 08:05:44.621712771 +0000 UTC m=+1.319869336"
	Oct 02 08:05:44 newest-cni-009374 kubelet[1324]: I1002 08:05:44.621839    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-009374" podStartSLOduration=1.621833412 podStartE2EDuration="1.621833412s" podCreationTimestamp="2025-10-02 08:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:05:44.621444723 +0000 UTC m=+1.319601305" watchObservedRunningTime="2025-10-02 08:05:44.621833412 +0000 UTC m=+1.319989994"
	Oct 02 08:05:44 newest-cni-009374 kubelet[1324]: I1002 08:05:44.669983    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-009374" podStartSLOduration=1.669962972 podStartE2EDuration="1.669962972s" podCreationTimestamp="2025-10-02 08:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:05:44.648151825 +0000 UTC m=+1.346308407" watchObservedRunningTime="2025-10-02 08:05:44.669962972 +0000 UTC m=+1.368119546"
	Oct 02 08:05:47 newest-cni-009374 kubelet[1324]: I1002 08:05:47.566128    1324 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 02 08:05:47 newest-cni-009374 kubelet[1324]: I1002 08:05:47.567259    1324 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 02 08:05:48 newest-cni-009374 kubelet[1324]: I1002 08:05:48.467888    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db609c90-476d-450d-a43d-0600b893f712-xtables-lock\") pod \"kube-proxy-qsv24\" (UID: \"db609c90-476d-450d-a43d-0600b893f712\") " pod="kube-system/kube-proxy-qsv24"
	Oct 02 08:05:48 newest-cni-009374 kubelet[1324]: I1002 08:05:48.467946    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9cf92b3-8ccb-4487-b783-29df2834d679-lib-modules\") pod \"kindnet-f45p7\" (UID: \"c9cf92b3-8ccb-4487-b783-29df2834d679\") " pod="kube-system/kindnet-f45p7"
	Oct 02 08:05:48 newest-cni-009374 kubelet[1324]: I1002 08:05:48.467987    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db609c90-476d-450d-a43d-0600b893f712-kube-proxy\") pod \"kube-proxy-qsv24\" (UID: \"db609c90-476d-450d-a43d-0600b893f712\") " pod="kube-system/kube-proxy-qsv24"
	Oct 02 08:05:48 newest-cni-009374 kubelet[1324]: I1002 08:05:48.468005    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db609c90-476d-450d-a43d-0600b893f712-lib-modules\") pod \"kube-proxy-qsv24\" (UID: \"db609c90-476d-450d-a43d-0600b893f712\") " pod="kube-system/kube-proxy-qsv24"
	Oct 02 08:05:48 newest-cni-009374 kubelet[1324]: I1002 08:05:48.468024    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8psm2\" (UniqueName: \"kubernetes.io/projected/db609c90-476d-450d-a43d-0600b893f712-kube-api-access-8psm2\") pod \"kube-proxy-qsv24\" (UID: \"db609c90-476d-450d-a43d-0600b893f712\") " pod="kube-system/kube-proxy-qsv24"
	Oct 02 08:05:48 newest-cni-009374 kubelet[1324]: I1002 08:05:48.468061    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42x5c\" (UniqueName: \"kubernetes.io/projected/c9cf92b3-8ccb-4487-b783-29df2834d679-kube-api-access-42x5c\") pod \"kindnet-f45p7\" (UID: \"c9cf92b3-8ccb-4487-b783-29df2834d679\") " pod="kube-system/kindnet-f45p7"
	Oct 02 08:05:48 newest-cni-009374 kubelet[1324]: I1002 08:05:48.468083    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c9cf92b3-8ccb-4487-b783-29df2834d679-cni-cfg\") pod \"kindnet-f45p7\" (UID: \"c9cf92b3-8ccb-4487-b783-29df2834d679\") " pod="kube-system/kindnet-f45p7"
	Oct 02 08:05:48 newest-cni-009374 kubelet[1324]: I1002 08:05:48.468103    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9cf92b3-8ccb-4487-b783-29df2834d679-xtables-lock\") pod \"kindnet-f45p7\" (UID: \"c9cf92b3-8ccb-4487-b783-29df2834d679\") " pod="kube-system/kindnet-f45p7"
	Oct 02 08:05:48 newest-cni-009374 kubelet[1324]: I1002 08:05:48.671815    1324 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 08:05:48 newest-cni-009374 kubelet[1324]: W1002 08:05:48.723416    1324 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5/crio-d527839808be7374cae691c63707eb13a135396ac90c024daa3350f335eac5e3 WatchSource:0}: Error finding container d527839808be7374cae691c63707eb13a135396ac90c024daa3350f335eac5e3: Status 404 returned error can't find the container with id d527839808be7374cae691c63707eb13a135396ac90c024daa3350f335eac5e3
	Oct 02 08:05:49 newest-cni-009374 kubelet[1324]: I1002 08:05:49.605030    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-f45p7" podStartSLOduration=1.605009269 podStartE2EDuration="1.605009269s" podCreationTimestamp="2025-10-02 08:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:05:49.574422755 +0000 UTC m=+6.272579320" watchObservedRunningTime="2025-10-02 08:05:49.605009269 +0000 UTC m=+6.303165835"
	Oct 02 08:05:51 newest-cni-009374 kubelet[1324]: I1002 08:05:51.984026    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qsv24" podStartSLOduration=3.984008975 podStartE2EDuration="3.984008975s" podCreationTimestamp="2025-10-02 08:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:05:49.605335278 +0000 UTC m=+6.303491852" watchObservedRunningTime="2025-10-02 08:05:51.984008975 +0000 UTC m=+8.682165541"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-009374 -n newest-cni-009374
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-009374 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-p2j8l storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-009374 describe pod coredns-66bc5c9577-p2j8l storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-009374 describe pod coredns-66bc5c9577-p2j8l storage-provisioner: exit status 1 (105.767837ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-p2j8l" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-009374 describe pod coredns-66bc5c9577-p2j8l storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-417078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-417078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (346.259126ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:05:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-417078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-417078 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-417078 describe deploy/metrics-server -n kube-system: exit status 1 (129.428256ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-417078 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-417078
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-417078:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba",
	        "Created": "2025-10-02T08:04:28.399453084Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 502223,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T08:04:28.462308474Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/hosts",
	        "LogPath": "/var/lib/docker/containers/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba-json.log",
	        "Name": "/default-k8s-diff-port-417078",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-417078:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-417078",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba",
	                "LowerDir": "/var/lib/docker/overlay2/0ca735e4bdb118c286be480b4f12dd3f904411128e2680db9b5f872634cd93c0-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0ca735e4bdb118c286be480b4f12dd3f904411128e2680db9b5f872634cd93c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0ca735e4bdb118c286be480b4f12dd3f904411128e2680db9b5f872634cd93c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0ca735e4bdb118c286be480b4f12dd3f904411128e2680db9b5f872634cd93c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-417078",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-417078/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-417078",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-417078",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-417078",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "22a66bdff126f3118b38297385a567fd5c4e8afd61085392c48cd51f63a7646b",
	            "SandboxKey": "/var/run/docker/netns/22a66bdff126",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-417078": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:3c:9d:87:b5:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d1780ea11813add7386f7a8e327ace3f3a59d3c8ad3cf5599ed166ee793fe5a6",
	                    "EndpointID": "0f0ff43f440e45783dac5d517cf3a49be43b175c847f65292920a6070eb1bb88",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-417078",
	                        "9b8a295e3342"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
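The inspect output above records the dynamically published host ports for the default-k8s-diff-port-417078 container (for example, 22/tcp is bound to 127.0.0.1:33428). As a hedged aside for anyone replaying this post-mortem by hand, the same lookup that minikube performs later in these logs can be run directly against the container named in this report; this is only a sketch and assumes the container still exists on the local Docker daemon:

	# Print the host port bound to the container's SSH port (22/tcp),
	# using the same Go template minikube uses in the "Last Start" log below.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-417078
	# For the run captured above this prints: 33428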
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417078 -n default-k8s-diff-port-417078
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-417078 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-417078 logs -n 25: (1.711481589s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:02 UTC │
	│ delete  │ -p cert-expiration-759246                                                                                                                                                                                                                     │ cert-expiration-759246       │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:01 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:01 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable metrics-server -p no-preload-604182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │                     │
	│ stop    │ -p no-preload-604182 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:02 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p no-preload-604182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-171347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │                     │
	│ stop    │ -p embed-certs-171347 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-171347 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:04 UTC │
	│ image   │ no-preload-604182 image list --format=json                                                                                                                                                                                                    │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p no-preload-604182 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p disable-driver-mounts-466206                                                                                                                                                                                                               │ disable-driver-mounts-466206 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ start   │ -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:05 UTC │
	│ image   │ embed-certs-171347 image list --format=json                                                                                                                                                                                                   │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p embed-certs-171347 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ delete  │ -p embed-certs-171347                                                                                                                                                                                                                         │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ delete  │ -p embed-certs-171347                                                                                                                                                                                                                         │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ start   │ -p newest-cni-009374 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ addons  │ enable metrics-server -p newest-cni-009374 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-417078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │                     │
	│ stop    │ -p newest-cni-009374 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:05:08
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:05:08.022338  505743 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:05:08.022476  505743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:05:08.022486  505743 out.go:374] Setting ErrFile to fd 2...
	I1002 08:05:08.022491  505743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:05:08.022906  505743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:05:08.023470  505743 out.go:368] Setting JSON to false
	I1002 08:05:08.024745  505743 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10059,"bootTime":1759382249,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 08:05:08.024827  505743 start.go:140] virtualization:  
	I1002 08:05:08.029079  505743 out.go:179] * [newest-cni-009374] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:05:08.033560  505743 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:05:08.033730  505743 notify.go:220] Checking for updates...
	I1002 08:05:08.037374  505743 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:05:08.040657  505743 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:05:08.044026  505743 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 08:05:08.047169  505743 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:05:08.050211  505743 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:05:08.053815  505743 config.go:182] Loaded profile config "default-k8s-diff-port-417078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:05:08.053955  505743 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:05:08.076702  505743 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:05:08.076830  505743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:05:08.144683  505743 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:05:08.135374991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:05:08.144798  505743 docker.go:318] overlay module found
	I1002 08:05:08.148207  505743 out.go:179] * Using the docker driver based on user configuration
	I1002 08:05:08.151263  505743 start.go:304] selected driver: docker
	I1002 08:05:08.151285  505743 start.go:924] validating driver "docker" against <nil>
	I1002 08:05:08.151301  505743 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:05:08.152047  505743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:05:08.213097  505743 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:05:08.203956252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:05:08.213252  505743 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1002 08:05:08.213283  505743 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1002 08:05:08.213525  505743 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 08:05:08.216513  505743 out.go:179] * Using Docker driver with root privileges
	I1002 08:05:08.219532  505743 cni.go:84] Creating CNI manager for ""
	I1002 08:05:08.219608  505743 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:05:08.219626  505743 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 08:05:08.219714  505743 start.go:348] cluster config:
	{Name:newest-cni-009374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:05:08.222768  505743 out.go:179] * Starting "newest-cni-009374" primary control-plane node in "newest-cni-009374" cluster
	I1002 08:05:08.225585  505743 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 08:05:08.228623  505743 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 08:05:08.231387  505743 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:05:08.231437  505743 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 08:05:08.231453  505743 cache.go:58] Caching tarball of preloaded images
	I1002 08:05:08.231488  505743 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 08:05:08.231545  505743 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 08:05:08.231556  505743 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 08:05:08.231661  505743 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/config.json ...
	I1002 08:05:08.231678  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/config.json: {Name:mk3b00f84ec9e01170e0b040f918d03f7f43d587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:08.249861  505743 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 08:05:08.249881  505743 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 08:05:08.249908  505743 cache.go:232] Successfully downloaded all kic artifacts
	I1002 08:05:08.249931  505743 start.go:360] acquireMachinesLock for newest-cni-009374: {Name:mkc4d59aea6378cca25c0d5a33fa5c014f2edd31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:05:08.250045  505743 start.go:364] duration metric: took 99.078µs to acquireMachinesLock for "newest-cni-009374"
	I1002 08:05:08.250095  505743 start.go:93] Provisioning new machine with config: &{Name:newest-cni-009374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009374 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:05:08.250182  505743 start.go:125] createHost starting for "" (driver="docker")
	W1002 08:05:09.325701  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:11.825504  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	I1002 08:05:08.253494  505743 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 08:05:08.253741  505743 start.go:159] libmachine.API.Create for "newest-cni-009374" (driver="docker")
	I1002 08:05:08.253792  505743 client.go:168] LocalClient.Create starting
	I1002 08:05:08.253879  505743 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem
	I1002 08:05:08.253919  505743 main.go:141] libmachine: Decoding PEM data...
	I1002 08:05:08.253935  505743 main.go:141] libmachine: Parsing certificate...
	I1002 08:05:08.253993  505743 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem
	I1002 08:05:08.254014  505743 main.go:141] libmachine: Decoding PEM data...
	I1002 08:05:08.254028  505743 main.go:141] libmachine: Parsing certificate...
	I1002 08:05:08.254417  505743 cli_runner.go:164] Run: docker network inspect newest-cni-009374 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 08:05:08.270633  505743 cli_runner.go:211] docker network inspect newest-cni-009374 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 08:05:08.270749  505743 network_create.go:284] running [docker network inspect newest-cni-009374] to gather additional debugging logs...
	I1002 08:05:08.270798  505743 cli_runner.go:164] Run: docker network inspect newest-cni-009374
	W1002 08:05:08.288673  505743 cli_runner.go:211] docker network inspect newest-cni-009374 returned with exit code 1
	I1002 08:05:08.288707  505743 network_create.go:287] error running [docker network inspect newest-cni-009374]: docker network inspect newest-cni-009374: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-009374 not found
	I1002 08:05:08.288719  505743 network_create.go:289] output of [docker network inspect newest-cni-009374]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-009374 not found
	
	** /stderr **
	I1002 08:05:08.288825  505743 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:05:08.305408  505743 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-87a294cab4b5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:50:ad:a1:2a:88} reservation:<nil>}
	I1002 08:05:08.305783  505743 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-560172b9232e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:9f:ec:fb:3f:87} reservation:<nil>}
	I1002 08:05:08.305935  505743 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2eae6334e56d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:6a:a0:79:3a:d9} reservation:<nil>}
	I1002 08:05:08.306269  505743 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d1780ea11813 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:84:d7:de:73:b2} reservation:<nil>}
	I1002 08:05:08.306679  505743 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5f90}
	I1002 08:05:08.306700  505743 network_create.go:124] attempt to create docker network newest-cni-009374 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 08:05:08.306768  505743 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-009374 newest-cni-009374
	I1002 08:05:08.366588  505743 network_create.go:108] docker network newest-cni-009374 192.168.85.0/24 created
	I1002 08:05:08.366623  505743 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-009374" container
	I1002 08:05:08.366717  505743 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 08:05:08.384356  505743 cli_runner.go:164] Run: docker volume create newest-cni-009374 --label name.minikube.sigs.k8s.io=newest-cni-009374 --label created_by.minikube.sigs.k8s.io=true
	I1002 08:05:08.403911  505743 oci.go:103] Successfully created a docker volume newest-cni-009374
	I1002 08:05:08.404007  505743 cli_runner.go:164] Run: docker run --rm --name newest-cni-009374-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-009374 --entrypoint /usr/bin/test -v newest-cni-009374:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 08:05:08.975801  505743 oci.go:107] Successfully prepared a docker volume newest-cni-009374
	I1002 08:05:08.975856  505743 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:05:08.975878  505743 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 08:05:08.975948  505743 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-009374:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	W1002 08:05:13.828110  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:16.325173  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	I1002 08:05:13.458680  505743 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-009374:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.482689234s)
	I1002 08:05:13.458711  505743 kic.go:203] duration metric: took 4.482830946s to extract preloaded images to volume ...
	W1002 08:05:13.458855  505743 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 08:05:13.458983  505743 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 08:05:13.514518  505743 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-009374 --name newest-cni-009374 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-009374 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-009374 --network newest-cni-009374 --ip 192.168.85.2 --volume newest-cni-009374:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 08:05:13.841231  505743 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Running}}
	I1002 08:05:13.862914  505743 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Status}}
	I1002 08:05:13.888408  505743 cli_runner.go:164] Run: docker exec newest-cni-009374 stat /var/lib/dpkg/alternatives/iptables
	I1002 08:05:13.956283  505743 oci.go:144] the created container "newest-cni-009374" has a running status.
	I1002 08:05:13.956327  505743 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa...
	I1002 08:05:14.316897  505743 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 08:05:14.357173  505743 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Status}}
	I1002 08:05:14.392058  505743 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 08:05:14.392077  505743 kic_runner.go:114] Args: [docker exec --privileged newest-cni-009374 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 08:05:14.448957  505743 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Status}}
	I1002 08:05:14.467707  505743 machine.go:93] provisionDockerMachine start ...
	I1002 08:05:14.467816  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:14.485889  505743 main.go:141] libmachine: Using SSH client type: native
	I1002 08:05:14.486247  505743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1002 08:05:14.486258  505743 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 08:05:14.486834  505743 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39936->127.0.0.1:33433: read: connection reset by peer
	I1002 08:05:17.626698  505743 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-009374
	
	I1002 08:05:17.626722  505743 ubuntu.go:182] provisioning hostname "newest-cni-009374"
	I1002 08:05:17.626787  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:17.647697  505743 main.go:141] libmachine: Using SSH client type: native
	I1002 08:05:17.648015  505743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1002 08:05:17.648033  505743 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-009374 && echo "newest-cni-009374" | sudo tee /etc/hostname
	I1002 08:05:17.788866  505743 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-009374
	
	I1002 08:05:17.788955  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:17.806775  505743 main.go:141] libmachine: Using SSH client type: native
	I1002 08:05:17.807144  505743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1002 08:05:17.807164  505743 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-009374' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-009374/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-009374' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 08:05:17.939726  505743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 08:05:17.939747  505743 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 08:05:17.939774  505743 ubuntu.go:190] setting up certificates
	I1002 08:05:17.939784  505743 provision.go:84] configureAuth start
	I1002 08:05:17.939841  505743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-009374
	I1002 08:05:17.957531  505743 provision.go:143] copyHostCerts
	I1002 08:05:17.957595  505743 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 08:05:17.957604  505743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 08:05:17.957687  505743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 08:05:17.957787  505743 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 08:05:17.957792  505743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 08:05:17.957818  505743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 08:05:17.957869  505743 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 08:05:17.957874  505743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 08:05:17.957895  505743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 08:05:17.958037  505743 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.newest-cni-009374 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-009374]
	I1002 08:05:18.723709  505743 provision.go:177] copyRemoteCerts
	I1002 08:05:18.723780  505743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 08:05:18.723830  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:18.744220  505743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:05:18.847147  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 08:05:18.864966  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 08:05:18.883591  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 08:05:18.901923  505743 provision.go:87] duration metric: took 962.125906ms to configureAuth
	I1002 08:05:18.901954  505743 ubuntu.go:206] setting minikube options for container-runtime
	I1002 08:05:18.902179  505743 config.go:182] Loaded profile config "newest-cni-009374": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:05:18.902288  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:18.920389  505743 main.go:141] libmachine: Using SSH client type: native
	I1002 08:05:18.920706  505743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1002 08:05:18.920728  505743 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 08:05:19.173677  505743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 08:05:19.173709  505743 machine.go:96] duration metric: took 4.705981163s to provisionDockerMachine
	I1002 08:05:19.173719  505743 client.go:171] duration metric: took 10.919915289s to LocalClient.Create
	I1002 08:05:19.173733  505743 start.go:167] duration metric: took 10.919994018s to libmachine.API.Create "newest-cni-009374"
	I1002 08:05:19.173741  505743 start.go:293] postStartSetup for "newest-cni-009374" (driver="docker")
	I1002 08:05:19.173768  505743 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 08:05:19.173844  505743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 08:05:19.173890  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:19.193028  505743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:05:19.291626  505743 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 08:05:19.295280  505743 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 08:05:19.295313  505743 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 08:05:19.295325  505743 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 08:05:19.295382  505743 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 08:05:19.295464  505743 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 08:05:19.295575  505743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 08:05:19.303473  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:05:19.321128  505743 start.go:296] duration metric: took 147.370977ms for postStartSetup
	I1002 08:05:19.321548  505743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-009374
	I1002 08:05:19.341054  505743 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/config.json ...
	I1002 08:05:19.341351  505743 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 08:05:19.341417  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:19.360528  505743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:05:19.456893  505743 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 08:05:19.462566  505743 start.go:128] duration metric: took 11.212369181s to createHost
	I1002 08:05:19.462590  505743 start.go:83] releasing machines lock for "newest-cni-009374", held for 11.212535919s
	I1002 08:05:19.462665  505743 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-009374
	I1002 08:05:19.479266  505743 ssh_runner.go:195] Run: cat /version.json
	I1002 08:05:19.479304  505743 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 08:05:19.479335  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:19.479376  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:19.499787  505743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:05:19.517617  505743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:05:19.607024  505743 ssh_runner.go:195] Run: systemctl --version
	I1002 08:05:19.706873  505743 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 08:05:19.744472  505743 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 08:05:19.748677  505743 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 08:05:19.748793  505743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 08:05:19.778661  505743 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 08:05:19.778690  505743 start.go:495] detecting cgroup driver to use...
	I1002 08:05:19.778724  505743 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 08:05:19.778777  505743 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 08:05:19.797235  505743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 08:05:19.810783  505743 docker.go:218] disabling cri-docker service (if available) ...
	I1002 08:05:19.810872  505743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 08:05:19.837467  505743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 08:05:19.858168  505743 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 08:05:20.005518  505743 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 08:05:20.138575  505743 docker.go:234] disabling docker service ...
	I1002 08:05:20.138678  505743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 08:05:20.164135  505743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 08:05:20.181704  505743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 08:05:20.302700  505743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 08:05:20.416747  505743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 08:05:20.430215  505743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 08:05:20.444616  505743 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 08:05:20.444685  505743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:05:20.453675  505743 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 08:05:20.453748  505743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:05:20.463428  505743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:05:20.473015  505743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:05:20.482922  505743 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 08:05:20.497688  505743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:05:20.515172  505743 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:05:20.537887  505743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:05:20.547776  505743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 08:05:20.555685  505743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 08:05:20.563485  505743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:05:20.690318  505743 ssh_runner.go:195] Run: sudo systemctl restart crio
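
The sed commands above rewrite the CRI-O drop-in before the restart. A minimal sketch of what two of those edits (pause_image and cgroup_manager) amount to, assuming the drop-in path from the log; this is an illustration, not minikube's implementation:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const dropIn = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(dropIn)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(dropIn, data, 0o644); err != nil {
		log.Fatal(err)
	}
}
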
	I1002 08:05:20.835845  505743 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 08:05:20.835932  505743 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 08:05:20.840630  505743 start.go:563] Will wait 60s for crictl version
	I1002 08:05:20.840715  505743 ssh_runner.go:195] Run: which crictl
	I1002 08:05:20.844905  505743 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 08:05:20.870316  505743 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 08:05:20.870405  505743 ssh_runner.go:195] Run: crio --version
	I1002 08:05:20.902956  505743 ssh_runner.go:195] Run: crio --version
	I1002 08:05:20.942068  505743 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 08:05:20.945111  505743 cli_runner.go:164] Run: docker network inspect newest-cni-009374 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:05:20.960322  505743 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 08:05:20.964597  505743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
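
The bash one-liner above pins host.minikube.internal in /etc/hosts by filtering out any stale entry and appending the current one. A small sketch of the same idea, using the address from the log (writing /etc/hosts requires root; this is an illustration, not minikube's code):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.1\thost.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// drop any previous host.minikube.internal pin
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)

	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
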
	I1002 08:05:20.978087  505743 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1002 08:05:18.326049  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:20.326484  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:22.826846  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	I1002 08:05:20.980895  505743 kubeadm.go:883] updating cluster {Name:newest-cni-009374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 08:05:20.981047  505743 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:05:20.981138  505743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:05:21.021326  505743 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:05:21.021353  505743 crio.go:433] Images already preloaded, skipping extraction
	I1002 08:05:21.021412  505743 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:05:21.054008  505743 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:05:21.054041  505743 cache_images.go:85] Images are preloaded, skipping loading
	I1002 08:05:21.054051  505743 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 08:05:21.054159  505743 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-009374 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 08:05:21.054247  505743 ssh_runner.go:195] Run: crio config
	I1002 08:05:21.125212  505743 cni.go:84] Creating CNI manager for ""
	I1002 08:05:21.125241  505743 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:05:21.125255  505743 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1002 08:05:21.125288  505743 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-009374 NodeName:newest-cni-009374 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 08:05:21.125440  505743 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-009374"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 08:05:21.125538  505743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 08:05:21.134696  505743 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 08:05:21.134777  505743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 08:05:21.142936  505743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 08:05:21.156693  505743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 08:05:21.170396  505743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1002 08:05:21.184284  505743 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 08:05:21.188433  505743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:05:21.199075  505743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:05:21.325114  505743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:05:21.342110  505743 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374 for IP: 192.168.85.2
	I1002 08:05:21.342184  505743 certs.go:195] generating shared ca certs ...
	I1002 08:05:21.342215  505743 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:21.342405  505743 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 08:05:21.342483  505743 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 08:05:21.342519  505743 certs.go:257] generating profile certs ...
	I1002 08:05:21.342604  505743 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/client.key
	I1002 08:05:21.342644  505743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/client.crt with IP's: []
	I1002 08:05:21.639036  505743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/client.crt ...
	I1002 08:05:21.639067  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/client.crt: {Name:mkc8bd3fbe68762ffa8e8c2092bda774e13be482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:21.639263  505743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/client.key ...
	I1002 08:05:21.639280  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/client.key: {Name:mk0d1f7e3e55c1b0b7f029711cf8307a1963c5b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:21.639382  505743 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.key.5f9bb80c
	I1002 08:05:21.639401  505743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.crt.5f9bb80c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 08:05:21.915402  505743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.crt.5f9bb80c ...
	I1002 08:05:21.915434  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.crt.5f9bb80c: {Name:mkf10ff2584ff29346ada4f0cb552775bf05892c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:21.915616  505743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.key.5f9bb80c ...
	I1002 08:05:21.915631  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.key.5f9bb80c: {Name:mk5996d7a84cbc156faaa104ec441acb6fe6aede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:21.915709  505743 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.crt.5f9bb80c -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.crt
	I1002 08:05:21.915810  505743 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.key.5f9bb80c -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.key
	I1002 08:05:21.915873  505743 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.key
	I1002 08:05:21.915896  505743 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.crt with IP's: []
	I1002 08:05:22.839211  505743 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.crt ...
	I1002 08:05:22.839246  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.crt: {Name:mkc3911cd99cf2913815fc32eea94248f3d6f8ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:22.839430  505743 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.key ...
	I1002 08:05:22.839446  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.key: {Name:mkba834fec2a23895bbd4f78a8f69fbac09d680d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:22.839640  505743 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 08:05:22.839683  505743 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 08:05:22.839698  505743 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 08:05:22.839721  505743 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 08:05:22.839747  505743 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 08:05:22.839773  505743 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 08:05:22.839820  505743 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:05:22.840387  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 08:05:22.861613  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 08:05:22.883271  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 08:05:22.911788  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 08:05:22.936138  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 08:05:22.957760  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 08:05:22.982284  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 08:05:23.002867  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/newest-cni-009374/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
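
To sanity-check the profile certs generated and copied above, one can parse apiserver.crt and print its SANs; per the log they should include 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.85.2. A small sketch, assuming the destination path from the scp step:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
	fmt.Println("NotAfter:", cert.NotAfter)
}
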
	W1002 08:05:25.324891  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:27.328290  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	I1002 08:05:23.027029  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 08:05:23.048353  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 08:05:23.069852  505743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 08:05:23.089334  505743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 08:05:23.102788  505743 ssh_runner.go:195] Run: openssl version
	I1002 08:05:23.109746  505743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 08:05:23.118425  505743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 08:05:23.122311  505743 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 08:05:23.122438  505743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 08:05:23.164465  505743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
	I1002 08:05:23.172996  505743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 08:05:23.181444  505743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 08:05:23.185332  505743 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 08:05:23.185419  505743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 08:05:23.227474  505743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 08:05:23.235947  505743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 08:05:23.244731  505743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:05:23.248403  505743 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:05:23.248492  505743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:05:23.289391  505743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
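
The openssl/ln pairs above install each CA under its OpenSSL subject-hash name so the system trust lookup finds it. A sketch of one such pair (hash the PEM, then create the <hash>.0 symlink); the PEM path is one of those from the log, and the real commands link via /etc/ssl/certs rather than directly:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"

	// Same as: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	// Same as: ln -fs <pem> /etc/ssl/certs/<hash>.0
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // -f behaviour: replace an existing link
	if err := os.Symlink(pemPath, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, pemPath)
}
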
	I1002 08:05:23.298040  505743 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 08:05:23.301713  505743 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 08:05:23.301795  505743 kubeadm.go:400] StartCluster: {Name:newest-cni-009374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-009374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:05:23.301877  505743 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 08:05:23.301955  505743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:05:23.333810  505743 cri.go:89] found id: ""
	I1002 08:05:23.333894  505743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 08:05:23.342252  505743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 08:05:23.350169  505743 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 08:05:23.350258  505743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 08:05:23.358221  505743 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 08:05:23.358244  505743 kubeadm.go:157] found existing configuration files:
	
	I1002 08:05:23.358295  505743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 08:05:23.366575  505743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 08:05:23.366649  505743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 08:05:23.374636  505743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 08:05:23.382842  505743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 08:05:23.382912  505743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 08:05:23.390975  505743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 08:05:23.399074  505743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 08:05:23.399233  505743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 08:05:23.406978  505743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 08:05:23.415366  505743 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 08:05:23.415440  505743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 08:05:23.423242  505743 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 08:05:23.491830  505743 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 08:05:23.492079  505743 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 08:05:23.559968  505743 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1002 08:05:29.827341  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:32.326114  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:34.326176  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:36.825669  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	W1002 08:05:38.825860  501823 node_ready.go:57] node "default-k8s-diff-port-417078" has "Ready":"False" status (will retry)
	I1002 08:05:41.343599  501823 node_ready.go:49] node "default-k8s-diff-port-417078" is "Ready"
	I1002 08:05:41.343634  501823 node_ready.go:38] duration metric: took 39.021590865s for node "default-k8s-diff-port-417078" to be "Ready" ...
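
The polling that just finished (node_ready.go) watches for the node's Ready condition to flip to True. A client-go sketch of the equivalent one-shot query; the kubeconfig path and use of clientcmd are assumptions for illustration, only the node name comes from the log:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig (~/.kube/config) points at the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "default-k8s-diff-port-417078", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s (%s)\n", c.Status, c.Reason)
		}
	}
}
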
	I1002 08:05:41.343649  501823 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:05:41.343709  501823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:05:41.365581  501823 api_server.go:72] duration metric: took 41.061846434s to wait for apiserver process to appear ...
	I1002 08:05:41.365610  501823 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:05:41.365630  501823 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1002 08:05:41.374889  501823 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1002 08:05:41.376990  501823 api_server.go:141] control plane version: v1.34.1
	I1002 08:05:41.377021  501823 api_server.go:131] duration metric: took 11.403551ms to wait for apiserver health ...
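
The healthz wait above boils down to an HTTPS GET against the apiserver that must return 200/ok. A standalone sketch using the address and port from the log; certificate verification is skipped here purely for illustration, whereas the real check trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: the real probe verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8444/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body)
}
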
	I1002 08:05:41.377032  501823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:05:41.381555  501823 system_pods.go:59] 8 kube-system pods found
	I1002 08:05:41.381596  501823 system_pods.go:61] "coredns-66bc5c9577-cscrn" [f16e8634-2bad-477e-8a6a-125d5982309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:05:41.381604  501823 system_pods.go:61] "etcd-default-k8s-diff-port-417078" [42031abb-d4f1-402f-ab56-84febc04510b] Running
	I1002 08:05:41.381621  501823 system_pods.go:61] "kindnet-xvmxj" [8150ddc1-f400-422d-a0a6-3a42c58bec39] Running
	I1002 08:05:41.381627  501823 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-417078" [a873c14b-9486-43dc-ae23-14e8295d0848] Running
	I1002 08:05:41.381632  501823 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-417078" [da19df7e-eaba-494d-8b1b-34d66627a3ef] Running
	I1002 08:05:41.381639  501823 system_pods.go:61] "kube-proxy-g6hc4" [63b17498-7dca-45ba-81a8-4aa33302a8df] Running
	I1002 08:05:41.381644  501823 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-417078" [ddfd8f2d-83ca-4e3c-98b3-c3a4ea103ee3] Running
	I1002 08:05:41.381656  501823 system_pods.go:61] "storage-provisioner" [12bac59c-b28d-4401-8b03-fb5742196ee4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:05:41.381662  501823 system_pods.go:74] duration metric: took 4.625766ms to wait for pod list to return data ...
	I1002 08:05:41.381686  501823 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:05:41.384536  501823 default_sa.go:45] found service account: "default"
	I1002 08:05:41.384566  501823 default_sa.go:55] duration metric: took 2.869984ms for default service account to be created ...
	I1002 08:05:41.384588  501823 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 08:05:41.388279  501823 system_pods.go:86] 8 kube-system pods found
	I1002 08:05:41.388326  501823 system_pods.go:89] "coredns-66bc5c9577-cscrn" [f16e8634-2bad-477e-8a6a-125d5982309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:05:41.388333  501823 system_pods.go:89] "etcd-default-k8s-diff-port-417078" [42031abb-d4f1-402f-ab56-84febc04510b] Running
	I1002 08:05:41.388340  501823 system_pods.go:89] "kindnet-xvmxj" [8150ddc1-f400-422d-a0a6-3a42c58bec39] Running
	I1002 08:05:41.388345  501823 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417078" [a873c14b-9486-43dc-ae23-14e8295d0848] Running
	I1002 08:05:41.388349  501823 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417078" [da19df7e-eaba-494d-8b1b-34d66627a3ef] Running
	I1002 08:05:41.388354  501823 system_pods.go:89] "kube-proxy-g6hc4" [63b17498-7dca-45ba-81a8-4aa33302a8df] Running
	I1002 08:05:41.388358  501823 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417078" [ddfd8f2d-83ca-4e3c-98b3-c3a4ea103ee3] Running
	I1002 08:05:41.388372  501823 system_pods.go:89] "storage-provisioner" [12bac59c-b28d-4401-8b03-fb5742196ee4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:05:41.388403  501823 retry.go:31] will retry after 289.517334ms: missing components: kube-dns
	I1002 08:05:41.682783  501823 system_pods.go:86] 8 kube-system pods found
	I1002 08:05:41.682832  501823 system_pods.go:89] "coredns-66bc5c9577-cscrn" [f16e8634-2bad-477e-8a6a-125d5982309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:05:41.682841  501823 system_pods.go:89] "etcd-default-k8s-diff-port-417078" [42031abb-d4f1-402f-ab56-84febc04510b] Running
	I1002 08:05:41.682848  501823 system_pods.go:89] "kindnet-xvmxj" [8150ddc1-f400-422d-a0a6-3a42c58bec39] Running
	I1002 08:05:41.682853  501823 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417078" [a873c14b-9486-43dc-ae23-14e8295d0848] Running
	I1002 08:05:41.682859  501823 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417078" [da19df7e-eaba-494d-8b1b-34d66627a3ef] Running
	I1002 08:05:41.682867  501823 system_pods.go:89] "kube-proxy-g6hc4" [63b17498-7dca-45ba-81a8-4aa33302a8df] Running
	I1002 08:05:41.682871  501823 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417078" [ddfd8f2d-83ca-4e3c-98b3-c3a4ea103ee3] Running
	I1002 08:05:41.682877  501823 system_pods.go:89] "storage-provisioner" [12bac59c-b28d-4401-8b03-fb5742196ee4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:05:41.682899  501823 retry.go:31] will retry after 324.416042ms: missing components: kube-dns
	I1002 08:05:42.021963  501823 system_pods.go:86] 8 kube-system pods found
	I1002 08:05:42.022003  501823 system_pods.go:89] "coredns-66bc5c9577-cscrn" [f16e8634-2bad-477e-8a6a-125d5982309c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:05:42.022011  501823 system_pods.go:89] "etcd-default-k8s-diff-port-417078" [42031abb-d4f1-402f-ab56-84febc04510b] Running
	I1002 08:05:42.022018  501823 system_pods.go:89] "kindnet-xvmxj" [8150ddc1-f400-422d-a0a6-3a42c58bec39] Running
	I1002 08:05:42.022023  501823 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417078" [a873c14b-9486-43dc-ae23-14e8295d0848] Running
	I1002 08:05:42.022028  501823 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417078" [da19df7e-eaba-494d-8b1b-34d66627a3ef] Running
	I1002 08:05:42.022032  501823 system_pods.go:89] "kube-proxy-g6hc4" [63b17498-7dca-45ba-81a8-4aa33302a8df] Running
	I1002 08:05:42.022037  501823 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417078" [ddfd8f2d-83ca-4e3c-98b3-c3a4ea103ee3] Running
	I1002 08:05:42.022043  501823 system_pods.go:89] "storage-provisioner" [12bac59c-b28d-4401-8b03-fb5742196ee4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 08:05:42.022092  501823 retry.go:31] will retry after 363.460211ms: missing components: kube-dns
	I1002 08:05:42.390427  501823 system_pods.go:86] 8 kube-system pods found
	I1002 08:05:42.390504  501823 system_pods.go:89] "coredns-66bc5c9577-cscrn" [f16e8634-2bad-477e-8a6a-125d5982309c] Running
	I1002 08:05:42.390533  501823 system_pods.go:89] "etcd-default-k8s-diff-port-417078" [42031abb-d4f1-402f-ab56-84febc04510b] Running
	I1002 08:05:42.390574  501823 system_pods.go:89] "kindnet-xvmxj" [8150ddc1-f400-422d-a0a6-3a42c58bec39] Running
	I1002 08:05:42.390599  501823 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417078" [a873c14b-9486-43dc-ae23-14e8295d0848] Running
	I1002 08:05:42.390624  501823 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417078" [da19df7e-eaba-494d-8b1b-34d66627a3ef] Running
	I1002 08:05:42.390648  501823 system_pods.go:89] "kube-proxy-g6hc4" [63b17498-7dca-45ba-81a8-4aa33302a8df] Running
	I1002 08:05:42.390679  501823 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417078" [ddfd8f2d-83ca-4e3c-98b3-c3a4ea103ee3] Running
	I1002 08:05:42.390706  501823 system_pods.go:89] "storage-provisioner" [12bac59c-b28d-4401-8b03-fb5742196ee4] Running
	I1002 08:05:42.390730  501823 system_pods.go:126] duration metric: took 1.006135523s to wait for k8s-apps to be running ...
	I1002 08:05:42.390753  501823 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 08:05:42.390838  501823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:05:42.405328  501823 system_svc.go:56] duration metric: took 14.566379ms WaitForService to wait for kubelet
	I1002 08:05:42.405407  501823 kubeadm.go:586] duration metric: took 42.101677399s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:05:42.405443  501823 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:05:42.409304  501823 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:05:42.409385  501823 node_conditions.go:123] node cpu capacity is 2
	I1002 08:05:42.409413  501823 node_conditions.go:105] duration metric: took 3.949878ms to run NodePressure ...
	I1002 08:05:42.409459  501823 start.go:241] waiting for startup goroutines ...
	I1002 08:05:42.409485  501823 start.go:246] waiting for cluster config update ...
	I1002 08:05:42.409513  501823 start.go:255] writing updated cluster config ...
	I1002 08:05:42.409885  501823 ssh_runner.go:195] Run: rm -f paused
	I1002 08:05:42.414926  501823 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:05:42.418640  501823 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cscrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:42.427726  501823 pod_ready.go:94] pod "coredns-66bc5c9577-cscrn" is "Ready"
	I1002 08:05:42.427751  501823 pod_ready.go:86] duration metric: took 9.081019ms for pod "coredns-66bc5c9577-cscrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:42.430607  501823 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:42.439177  501823 pod_ready.go:94] pod "etcd-default-k8s-diff-port-417078" is "Ready"
	I1002 08:05:42.439207  501823 pod_ready.go:86] duration metric: took 8.573938ms for pod "etcd-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:42.444672  501823 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:42.454840  501823 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-417078" is "Ready"
	I1002 08:05:42.454866  501823 pod_ready.go:86] duration metric: took 10.170293ms for pod "kube-apiserver-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:42.490995  501823 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:42.818511  501823 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-417078" is "Ready"
	I1002 08:05:42.818540  501823 pod_ready.go:86] duration metric: took 327.461807ms for pod "kube-controller-manager-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:44.068293  505743 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 08:05:44.068459  505743 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 08:05:44.068605  505743 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 08:05:44.068686  505743 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 08:05:44.068740  505743 kubeadm.go:318] OS: Linux
	I1002 08:05:44.068812  505743 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 08:05:44.068867  505743 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 08:05:44.068921  505743 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 08:05:44.068975  505743 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 08:05:44.069038  505743 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 08:05:44.069089  505743 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 08:05:44.069139  505743 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 08:05:44.069205  505743 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 08:05:44.069275  505743 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 08:05:44.069368  505743 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 08:05:44.069490  505743 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 08:05:44.069591  505743 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 08:05:44.069660  505743 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 08:05:44.072822  505743 out.go:252]   - Generating certificates and keys ...
	I1002 08:05:44.072941  505743 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 08:05:44.073015  505743 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 08:05:44.073090  505743 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 08:05:44.073154  505743 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 08:05:44.073221  505743 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 08:05:44.073282  505743 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 08:05:44.073343  505743 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 08:05:44.073473  505743 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-009374] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 08:05:44.073531  505743 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 08:05:44.073656  505743 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-009374] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 08:05:44.073777  505743 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 08:05:44.073869  505743 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 08:05:44.073942  505743 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 08:05:44.074037  505743 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 08:05:44.074125  505743 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 08:05:44.074206  505743 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 08:05:44.074291  505743 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 08:05:44.074384  505743 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 08:05:44.074476  505743 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 08:05:44.074568  505743 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 08:05:44.074640  505743 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 08:05:44.077622  505743 out.go:252]   - Booting up control plane ...
	I1002 08:05:44.077739  505743 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 08:05:44.077841  505743 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 08:05:44.077916  505743 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 08:05:44.078028  505743 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 08:05:44.078139  505743 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 08:05:44.078267  505743 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 08:05:44.078364  505743 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 08:05:44.078411  505743 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 08:05:44.078556  505743 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 08:05:44.078680  505743 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 08:05:44.078746  505743 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501717371s
	I1002 08:05:44.078844  505743 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 08:05:44.078931  505743 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 08:05:44.079028  505743 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 08:05:44.079152  505743 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 08:05:44.079234  505743 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.115324492s
	I1002 08:05:44.079303  505743 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.85277951s
	I1002 08:05:44.079377  505743 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002546178s
	I1002 08:05:44.079497  505743 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 08:05:44.079631  505743 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 08:05:44.079719  505743 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 08:05:44.079920  505743 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-009374 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 08:05:44.079981  505743 kubeadm.go:318] [bootstrap-token] Using token: tre844.97ebvftte9n7mk7q
	I1002 08:05:44.083064  505743 out.go:252]   - Configuring RBAC rules ...
	I1002 08:05:44.083221  505743 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 08:05:44.083310  505743 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 08:05:44.083460  505743 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 08:05:44.083605  505743 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 08:05:44.083731  505743 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 08:05:44.083824  505743 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 08:05:44.083957  505743 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 08:05:44.084010  505743 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 08:05:44.084063  505743 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 08:05:44.084070  505743 kubeadm.go:318] 
	I1002 08:05:44.084147  505743 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 08:05:44.084158  505743 kubeadm.go:318] 
	I1002 08:05:44.084238  505743 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 08:05:44.084246  505743 kubeadm.go:318] 
	I1002 08:05:44.084273  505743 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 08:05:44.084339  505743 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 08:05:44.084399  505743 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 08:05:44.084412  505743 kubeadm.go:318] 
	I1002 08:05:44.084469  505743 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 08:05:44.084477  505743 kubeadm.go:318] 
	I1002 08:05:44.084527  505743 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 08:05:44.084535  505743 kubeadm.go:318] 
	I1002 08:05:44.084591  505743 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 08:05:44.084672  505743 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 08:05:44.084747  505743 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 08:05:44.084755  505743 kubeadm.go:318] 
	I1002 08:05:44.084844  505743 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 08:05:44.084929  505743 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 08:05:44.084937  505743 kubeadm.go:318] 
	I1002 08:05:44.085026  505743 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token tre844.97ebvftte9n7mk7q \
	I1002 08:05:44.085137  505743 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf \
	I1002 08:05:44.085164  505743 kubeadm.go:318] 	--control-plane 
	I1002 08:05:44.085171  505743 kubeadm.go:318] 
	I1002 08:05:44.085261  505743 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 08:05:44.085269  505743 kubeadm.go:318] 
	I1002 08:05:44.085355  505743 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token tre844.97ebvftte9n7mk7q \
	I1002 08:05:44.085479  505743 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf 
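
The --discovery-token-ca-cert-hash in the join command above is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). A sketch that recomputes it from the CA certificate, assuming the ca.crt path used in the earlier scp step:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the SubjectPublicKeyInfo, not the whole certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
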
	I1002 08:05:44.085491  505743 cni.go:84] Creating CNI manager for ""
	I1002 08:05:44.085498  505743 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:05:44.088521  505743 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 08:05:43.020395  501823 pod_ready.go:83] waiting for pod "kube-proxy-g6hc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:43.420222  501823 pod_ready.go:94] pod "kube-proxy-g6hc4" is "Ready"
	I1002 08:05:43.420263  501823 pod_ready.go:86] duration metric: took 399.783731ms for pod "kube-proxy-g6hc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:43.620589  501823 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:44.019358  501823 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-417078" is "Ready"
	I1002 08:05:44.019384  501823 pod_ready.go:86] duration metric: took 398.769822ms for pod "kube-scheduler-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:05:44.019399  501823 pod_ready.go:40] duration metric: took 1.604437667s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:05:44.107126  501823 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 08:05:44.110302  501823 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-417078" cluster and "default" namespace by default
	I1002 08:05:44.091480  505743 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 08:05:44.096760  505743 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 08:05:44.096783  505743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 08:05:44.120326  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 08:05:44.708936  505743 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 08:05:44.709082  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:44.709162  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-009374 minikube.k8s.io/updated_at=2025_10_02T08_05_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=newest-cni-009374 minikube.k8s.io/primary=true
	I1002 08:05:44.917353  505743 ops.go:34] apiserver oom_adj: -16
	I1002 08:05:44.917468  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:45.417763  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:45.917804  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:46.417700  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:46.918017  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:47.418059  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:47.917591  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:48.418153  505743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:05:48.554171  505743 kubeadm.go:1113] duration metric: took 3.845142219s to wait for elevateKubeSystemPrivileges
	I1002 08:05:48.554197  505743 kubeadm.go:402] duration metric: took 25.252431982s to StartCluster
	I1002 08:05:48.554214  505743 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:48.554274  505743 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:05:48.555268  505743 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:05:48.555504  505743 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:05:48.555588  505743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 08:05:48.555852  505743 config.go:182] Loaded profile config "newest-cni-009374": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:05:48.555891  505743 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 08:05:48.555949  505743 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-009374"
	I1002 08:05:48.555963  505743 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-009374"
	I1002 08:05:48.555983  505743 host.go:66] Checking if "newest-cni-009374" exists ...
	I1002 08:05:48.556680  505743 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Status}}
	I1002 08:05:48.557035  505743 addons.go:69] Setting default-storageclass=true in profile "newest-cni-009374"
	I1002 08:05:48.557053  505743 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-009374"
	I1002 08:05:48.557344  505743 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Status}}
	I1002 08:05:48.560778  505743 out.go:179] * Verifying Kubernetes components...
	I1002 08:05:48.567205  505743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:05:48.592829  505743 addons.go:238] Setting addon default-storageclass=true in "newest-cni-009374"
	I1002 08:05:48.592870  505743 host.go:66] Checking if "newest-cni-009374" exists ...
	I1002 08:05:48.593302  505743 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Status}}
	I1002 08:05:48.603239  505743 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 08:05:48.613074  505743 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:05:48.613100  505743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 08:05:48.613177  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:48.637067  505743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:05:48.643229  505743 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 08:05:48.643251  505743 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 08:05:48.643325  505743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:05:48.675134  505743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:05:48.992074  505743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:05:49.070131  505743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 08:05:49.070340  505743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:05:49.125005  505743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:05:49.836295  505743 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1002 08:05:49.839577  505743 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:05:49.839669  505743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:05:49.859202  505743 api_server.go:72] duration metric: took 1.303671575s to wait for apiserver process to appear ...
	I1002 08:05:49.859278  505743 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:05:49.859310  505743 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 08:05:49.888485  505743 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 08:05:49.892848  505743 api_server.go:141] control plane version: v1.34.1
	I1002 08:05:49.892875  505743 api_server.go:131] duration metric: took 33.574415ms to wait for apiserver health ...
	I1002 08:05:49.892884  505743 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:05:49.895481  505743 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 08:05:49.897198  505743 system_pods.go:59] 9 kube-system pods found
	I1002 08:05:49.897232  505743 system_pods.go:61] "coredns-66bc5c9577-p2j8l" [a810de8d-b66f-404e-8b14-911266df5272] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 08:05:49.897241  505743 system_pods.go:61] "coredns-66bc5c9577-vfgvv" [2ee2a4e0-4f16-4a78-b0ab-8ec1b8e98193] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 08:05:49.897249  505743 system_pods.go:61] "etcd-newest-cni-009374" [cabdca96-8777-4057-9e06-1781a4bca780] Running
	I1002 08:05:49.897253  505743 system_pods.go:61] "kindnet-f45p7" [c9cf92b3-8ccb-4487-b783-29df2834d679] Running
	I1002 08:05:49.897266  505743 system_pods.go:61] "kube-apiserver-newest-cni-009374" [986bf8bd-e659-4a96-9fa6-55f2e838b6dd] Running
	I1002 08:05:49.897274  505743 system_pods.go:61] "kube-controller-manager-newest-cni-009374" [b41b9bc3-59aa-4596-9d21-207dfe86cf1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:05:49.897278  505743 system_pods.go:61] "kube-proxy-qsv24" [db609c90-476d-450d-a43d-0600b893f712] Running
	I1002 08:05:49.897284  505743 system_pods.go:61] "kube-scheduler-newest-cni-009374" [5e2e0730-38ef-4779-a6a6-0fe4a374388f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:05:49.897288  505743 system_pods.go:61] "storage-provisioner" [187ddc8e-cf7d-471a-b913-c757e198b82a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 08:05:49.897295  505743 system_pods.go:74] duration metric: took 4.405719ms to wait for pod list to return data ...
	I1002 08:05:49.897303  505743 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:05:49.899191  505743 addons.go:514] duration metric: took 1.343283329s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 08:05:49.900809  505743 default_sa.go:45] found service account: "default"
	I1002 08:05:49.900836  505743 default_sa.go:55] duration metric: took 3.526268ms for default service account to be created ...
	I1002 08:05:49.900849  505743 kubeadm.go:586] duration metric: took 1.345321847s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 08:05:49.900865  505743 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:05:49.907440  505743 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:05:49.907472  505743 node_conditions.go:123] node cpu capacity is 2
	I1002 08:05:49.907486  505743 node_conditions.go:105] duration metric: took 6.61529ms to run NodePressure ...
	I1002 08:05:49.907498  505743 start.go:241] waiting for startup goroutines ...
	I1002 08:05:50.340372  505743 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-009374" context rescaled to 1 replicas
	I1002 08:05:50.340415  505743 start.go:246] waiting for cluster config update ...
	I1002 08:05:50.340428  505743 start.go:255] writing updated cluster config ...
	I1002 08:05:50.340733  505743 ssh_runner.go:195] Run: rm -f paused
	I1002 08:05:50.398894  505743 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 08:05:50.403481  505743 out.go:179] * Done! kubectl is now configured to use "newest-cni-009374" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 08:05:41 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:41.535981795Z" level=info msg="Created container 1c6aabc2ede0a5ac620f9f10b07fdbb36ce483628fd6227f0b5247412491cd98: kube-system/coredns-66bc5c9577-cscrn/coredns" id=1b3ec9ce-a882-43a9-9174-cac0c2abf0fb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:05:41 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:41.538772763Z" level=info msg="Starting container: 1c6aabc2ede0a5ac620f9f10b07fdbb36ce483628fd6227f0b5247412491cd98" id=36c1d0e3-8088-43e3-a649-95ae15ae6089 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:05:41 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:41.552144882Z" level=info msg="Started container" PID=1720 containerID=1c6aabc2ede0a5ac620f9f10b07fdbb36ce483628fd6227f0b5247412491cd98 description=kube-system/coredns-66bc5c9577-cscrn/coredns id=36c1d0e3-8088-43e3-a649-95ae15ae6089 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48e861b6c53d22cd357ce9002b6f27e852113b266bf1d6610be14add752b3a4e
	Oct 02 08:05:44 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:44.729380096Z" level=info msg="Running pod sandbox: default/busybox/POD" id=25c2ea27-95b7-4937-bfc0-7ec382a416a3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:05:44 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:44.729454139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:05:44 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:44.742433968Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d1c066914ad826252280b9abe5ecaa73c6669f2f3e7b10b98261238717dff588 UID:c863efda-3502-432b-8d0a-03bbb8b70f5e NetNS:/var/run/netns/9153d25c-1f5e-4bcb-babf-b16f70bf2d87 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d150}] Aliases:map[]}"
	Oct 02 08:05:44 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:44.742624764Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 02 08:05:44 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:44.756320529Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d1c066914ad826252280b9abe5ecaa73c6669f2f3e7b10b98261238717dff588 UID:c863efda-3502-432b-8d0a-03bbb8b70f5e NetNS:/var/run/netns/9153d25c-1f5e-4bcb-babf-b16f70bf2d87 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d150}] Aliases:map[]}"
	Oct 02 08:05:44 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:44.756497761Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 02 08:05:44 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:44.764000165Z" level=info msg="Ran pod sandbox d1c066914ad826252280b9abe5ecaa73c6669f2f3e7b10b98261238717dff588 with infra container: default/busybox/POD" id=25c2ea27-95b7-4937-bfc0-7ec382a416a3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:05:44 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:44.765276403Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=33f10538-e2ee-48af-8d85-d4b1505de37f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:05:44 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:44.765467108Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=33f10538-e2ee-48af-8d85-d4b1505de37f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:05:44 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:44.765515502Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=33f10538-e2ee-48af-8d85-d4b1505de37f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:05:44 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:44.766675373Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=35f4c21d-201f-49c3-92de-384f531d4dba name=/runtime.v1.ImageService/PullImage
	Oct 02 08:05:44 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:44.76940248Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 02 08:05:46 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:46.782377173Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=35f4c21d-201f-49c3-92de-384f531d4dba name=/runtime.v1.ImageService/PullImage
	Oct 02 08:05:46 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:46.783553332Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5466246d-9b70-49c8-8edd-f3313b5b9400 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:05:46 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:46.785485534Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=deb1aaf8-a499-4bdc-a767-28fade0227b5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:05:46 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:46.79223918Z" level=info msg="Creating container: default/busybox/busybox" id=20097b04-efde-4fed-87c4-40d8c541becc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:05:46 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:46.79305961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:05:46 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:46.797863322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:05:46 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:46.798389087Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:05:46 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:46.813956911Z" level=info msg="Created container f6e357cce7458bfab83327e000f6819088e1d3ff09853530c3cad2eb6ab96267: default/busybox/busybox" id=20097b04-efde-4fed-87c4-40d8c541becc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:05:46 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:46.815403005Z" level=info msg="Starting container: f6e357cce7458bfab83327e000f6819088e1d3ff09853530c3cad2eb6ab96267" id=2a435200-5c83-4cc6-8677-8bf3a1d087b9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:05:46 default-k8s-diff-port-417078 crio[836]: time="2025-10-02T08:05:46.818404452Z" level=info msg="Started container" PID=1775 containerID=f6e357cce7458bfab83327e000f6819088e1d3ff09853530c3cad2eb6ab96267 description=default/busybox/busybox id=2a435200-5c83-4cc6-8677-8bf3a1d087b9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1c066914ad826252280b9abe5ecaa73c6669f2f3e7b10b98261238717dff588
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	f6e357cce7458       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   d1c066914ad82       busybox                                                default
	1c6aabc2ede0a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   48e861b6c53d2       coredns-66bc5c9577-cscrn                               kube-system
	f703f5c2c27bc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   5a0db99aae14d       storage-provisioner                                    kube-system
	1d9c051b548ab       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   27af3a16d57f3       kube-proxy-g6hc4                                       kube-system
	63b644483da2e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   4adcdcbecabf0       kindnet-xvmxj                                          kube-system
	6705d5fc5ea31       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   9e4a7519ee9c4       kube-apiserver-default-k8s-diff-port-417078            kube-system
	7f9dd328faa0a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   ef8317c536c6d       kube-controller-manager-default-k8s-diff-port-417078   kube-system
	f67546e869860       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   9c7ba515a5ca5       etcd-default-k8s-diff-port-417078                      kube-system
	e5e019aead962       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   0f7fd4858f299       kube-scheduler-default-k8s-diff-port-417078            kube-system
	
	
	==> coredns [1c6aabc2ede0a5ac620f9f10b07fdbb36ce483628fd6227f0b5247412491cd98] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47789 - 20921 "HINFO IN 8831480284042988873.496471419729034834. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004056324s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-417078
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-417078
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=default-k8s-diff-port-417078
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T08_04_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 08:04:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-417078
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:05:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:05:45 +0000   Thu, 02 Oct 2025 08:04:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:05:45 +0000   Thu, 02 Oct 2025 08:04:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:05:45 +0000   Thu, 02 Oct 2025 08:04:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 08:05:45 +0000   Thu, 02 Oct 2025 08:05:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-417078
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7f4eadfd161449495eae1681cf1fa9d
	  System UUID:                f4fac9d3-943a-43ee-b70b-67637923d71e
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-cscrn                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-default-k8s-diff-port-417078                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-xvmxj                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-417078             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-417078    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-g6hc4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-417078             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Warning  CgroupV1                 70s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node default-k8s-diff-port-417078 event: Registered Node default-k8s-diff-port-417078 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-417078 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 07:36] overlayfs: idmapped layers are currently not supported
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:00] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:03] overlayfs: idmapped layers are currently not supported
	[ +38.953360] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:05] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f67546e86986096b5f5b78870dce090efdac61be2c5602666e87ba27424281a4] <==
	{"level":"warn","ts":"2025-10-02T08:04:49.645771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.679524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.684902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.702496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.721535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.738855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.758250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.781900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.793879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.820975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.830230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.850368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.880009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.900366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.913826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.930461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.948343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.978502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:49.989282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:50.012277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:50.033077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:50.069750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:50.114506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:50.141527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:04:50.199250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52354","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:05:54 up  2:48,  0 user,  load average: 3.06, 3.02, 2.29
	Linux default-k8s-diff-port-417078 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [63b644483da2e0c05555cf548492349577c3b74f85ed4875971d5735300168be] <==
	I1002 08:05:00.721149       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 08:05:00.721724       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 08:05:00.721883       1 main.go:148] setting mtu 1500 for CNI 
	I1002 08:05:00.721897       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 08:05:00.721914       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T08:05:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 08:05:00.999692       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 08:05:00.999717       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 08:05:00.999727       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 08:05:01.001731       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 08:05:30.935898       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 08:05:31.000675       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 08:05:31.000822       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 08:05:31.003377       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 08:05:32.500253       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 08:05:32.500362       1 metrics.go:72] Registering metrics
	I1002 08:05:32.500456       1 controller.go:711] "Syncing nftables rules"
	I1002 08:05:40.939190       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:05:40.939309       1 main.go:301] handling current node
	I1002 08:05:50.937094       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:05:50.937131       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6705d5fc5ea31c03adf52cf68ca44396a7fcad7b41f58093097835cef73bbd92] <==
	I1002 08:04:51.238725       1 controller.go:667] quota admission added evaluator for: namespaces
	E1002 08:04:51.244606       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1002 08:04:51.315965       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:04:51.316892       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1002 08:04:51.333423       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:04:51.338924       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 08:04:51.457938       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 08:04:51.942493       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 08:04:51.954418       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 08:04:51.956546       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:04:52.867534       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:04:52.927540       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:04:53.069600       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 08:04:53.079755       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1002 08:04:53.080968       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 08:04:53.089807       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 08:04:53.854074       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 08:04:53.873312       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 08:04:53.891430       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 08:04:53.906627       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 08:04:59.636835       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:04:59.644952       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:04:59.976348       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1002 08:05:00.115993       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1002 08:05:52.629839       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:34418: use of closed network connection
	
	
	==> kube-controller-manager [7f9dd328faa0abd244cc14eeae12ad0ef3a159c280d14abcc37dc199bbd4648d] <==
	I1002 08:04:59.002763       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 08:04:59.002894       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 08:04:59.002924       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 08:04:59.002930       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 08:04:59.002937       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 08:04:59.003389       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 08:04:59.012475       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 08:04:59.012588       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:04:59.012596       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 08:04:59.012604       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 08:04:59.013004       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 08:04:59.013072       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 08:04:59.013139       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-417078"
	I1002 08:04:59.013189       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 08:04:59.015707       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 08:04:59.015963       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 08:04:59.016174       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 08:04:59.016976       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 08:04:59.018802       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 08:04:59.018828       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 08:04:59.018847       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 08:04:59.024747       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 08:04:59.025376       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:04:59.032584       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-417078" podCIDRs=["10.244.0.0/24"]
	I1002 08:05:44.300704       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1d9c051b548abdecd6865aa44cb7cc7769e8074e6ac4aca6bbe0067059899f85] <==
	I1002 08:05:01.250783       1 server_linux.go:53] "Using iptables proxy"
	I1002 08:05:01.374325       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 08:05:01.474919       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 08:05:01.474963       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 08:05:01.475042       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 08:05:01.540495       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 08:05:01.540554       1 server_linux.go:132] "Using iptables Proxier"
	I1002 08:05:01.546797       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 08:05:01.547277       1 server.go:527] "Version info" version="v1.34.1"
	I1002 08:05:01.547292       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:05:01.549167       1 config.go:200] "Starting service config controller"
	I1002 08:05:01.549181       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 08:05:01.549201       1 config.go:106] "Starting endpoint slice config controller"
	I1002 08:05:01.549207       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 08:05:01.549227       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 08:05:01.549231       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 08:05:01.551538       1 config.go:309] "Starting node config controller"
	I1002 08:05:01.551557       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 08:05:01.551594       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 08:05:01.652661       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 08:05:01.652711       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 08:05:01.652759       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e5e019aead96254a6dd864d71c7750f3d232faa13fb160295d0a56dedb895ab3] <==
	I1002 08:04:52.258330       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:04:52.265762       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 08:04:52.266313       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:04:52.266404       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:04:52.266452       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 08:04:52.277919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 08:04:52.278151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 08:04:52.278255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 08:04:52.285368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 08:04:52.285590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 08:04:52.286584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 08:04:52.286783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 08:04:52.286876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 08:04:52.286954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 08:04:52.287035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 08:04:52.287147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 08:04:52.287239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 08:04:52.287362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 08:04:52.287566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 08:04:52.287651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 08:04:52.287786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 08:04:52.287866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 08:04:52.287911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 08:04:52.287960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1002 08:04:53.666908       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 08:04:59 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:04:59.077554    1283 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 08:04:59 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:04:59.078692    1283 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 08:05:00 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:00.166920    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/63b17498-7dca-45ba-81a8-4aa33302a8df-kube-proxy\") pod \"kube-proxy-g6hc4\" (UID: \"63b17498-7dca-45ba-81a8-4aa33302a8df\") " pod="kube-system/kube-proxy-g6hc4"
	Oct 02 08:05:00 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:00.167006    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8150ddc1-f400-422d-a0a6-3a42c58bec39-xtables-lock\") pod \"kindnet-xvmxj\" (UID: \"8150ddc1-f400-422d-a0a6-3a42c58bec39\") " pod="kube-system/kindnet-xvmxj"
	Oct 02 08:05:00 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:00.167032    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bp4g\" (UniqueName: \"kubernetes.io/projected/63b17498-7dca-45ba-81a8-4aa33302a8df-kube-api-access-5bp4g\") pod \"kube-proxy-g6hc4\" (UID: \"63b17498-7dca-45ba-81a8-4aa33302a8df\") " pod="kube-system/kube-proxy-g6hc4"
	Oct 02 08:05:00 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:00.167859    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63b17498-7dca-45ba-81a8-4aa33302a8df-xtables-lock\") pod \"kube-proxy-g6hc4\" (UID: \"63b17498-7dca-45ba-81a8-4aa33302a8df\") " pod="kube-system/kube-proxy-g6hc4"
	Oct 02 08:05:00 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:00.167974    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8150ddc1-f400-422d-a0a6-3a42c58bec39-cni-cfg\") pod \"kindnet-xvmxj\" (UID: \"8150ddc1-f400-422d-a0a6-3a42c58bec39\") " pod="kube-system/kindnet-xvmxj"
	Oct 02 08:05:00 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:00.168030    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkvss\" (UniqueName: \"kubernetes.io/projected/8150ddc1-f400-422d-a0a6-3a42c58bec39-kube-api-access-bkvss\") pod \"kindnet-xvmxj\" (UID: \"8150ddc1-f400-422d-a0a6-3a42c58bec39\") " pod="kube-system/kindnet-xvmxj"
	Oct 02 08:05:00 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:00.168050    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63b17498-7dca-45ba-81a8-4aa33302a8df-lib-modules\") pod \"kube-proxy-g6hc4\" (UID: \"63b17498-7dca-45ba-81a8-4aa33302a8df\") " pod="kube-system/kube-proxy-g6hc4"
	Oct 02 08:05:00 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:00.168134    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8150ddc1-f400-422d-a0a6-3a42c58bec39-lib-modules\") pod \"kindnet-xvmxj\" (UID: \"8150ddc1-f400-422d-a0a6-3a42c58bec39\") " pod="kube-system/kindnet-xvmxj"
	Oct 02 08:05:00 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:00.343419    1283 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 08:05:00 default-k8s-diff-port-417078 kubelet[1283]: W1002 08:05:00.733113    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/crio-27af3a16d57f3dbdedf8c904fe164a274ca6b4cfde0b2be22590079a739beff1 WatchSource:0}: Error finding container 27af3a16d57f3dbdedf8c904fe164a274ca6b4cfde0b2be22590079a739beff1: Status 404 returned error can't find the container with id 27af3a16d57f3dbdedf8c904fe164a274ca6b4cfde0b2be22590079a739beff1
	Oct 02 08:05:02 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:02.004892    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-xvmxj" podStartSLOduration=3.004868112 podStartE2EDuration="3.004868112s" podCreationTimestamp="2025-10-02 08:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:05:01.091671071 +0000 UTC m=+7.399544989" watchObservedRunningTime="2025-10-02 08:05:02.004868112 +0000 UTC m=+8.312742030"
	Oct 02 08:05:41 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:41.056353    1283 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 02 08:05:41 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:41.091632    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g6hc4" podStartSLOduration=42.091613743 podStartE2EDuration="42.091613743s" podCreationTimestamp="2025-10-02 08:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:05:02.005297802 +0000 UTC m=+8.313171737" watchObservedRunningTime="2025-10-02 08:05:41.091613743 +0000 UTC m=+47.399487661"
	Oct 02 08:05:41 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:41.202824    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/12bac59c-b28d-4401-8b03-fb5742196ee4-tmp\") pod \"storage-provisioner\" (UID: \"12bac59c-b28d-4401-8b03-fb5742196ee4\") " pod="kube-system/storage-provisioner"
	Oct 02 08:05:41 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:41.203174    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f16e8634-2bad-477e-8a6a-125d5982309c-config-volume\") pod \"coredns-66bc5c9577-cscrn\" (UID: \"f16e8634-2bad-477e-8a6a-125d5982309c\") " pod="kube-system/coredns-66bc5c9577-cscrn"
	Oct 02 08:05:41 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:41.203218    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6kln\" (UniqueName: \"kubernetes.io/projected/12bac59c-b28d-4401-8b03-fb5742196ee4-kube-api-access-f6kln\") pod \"storage-provisioner\" (UID: \"12bac59c-b28d-4401-8b03-fb5742196ee4\") " pod="kube-system/storage-provisioner"
	Oct 02 08:05:41 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:41.203251    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49lgs\" (UniqueName: \"kubernetes.io/projected/f16e8634-2bad-477e-8a6a-125d5982309c-kube-api-access-49lgs\") pod \"coredns-66bc5c9577-cscrn\" (UID: \"f16e8634-2bad-477e-8a6a-125d5982309c\") " pod="kube-system/coredns-66bc5c9577-cscrn"
	Oct 02 08:05:41 default-k8s-diff-port-417078 kubelet[1283]: W1002 08:05:41.418975    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/crio-5a0db99aae14d7b3cd56ea052c3662234f7b15998ea35b2b185b6d4e4901d129 WatchSource:0}: Error finding container 5a0db99aae14d7b3cd56ea052c3662234f7b15998ea35b2b185b6d4e4901d129: Status 404 returned error can't find the container with id 5a0db99aae14d7b3cd56ea052c3662234f7b15998ea35b2b185b6d4e4901d129
	Oct 02 08:05:41 default-k8s-diff-port-417078 kubelet[1283]: W1002 08:05:41.484025    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/crio-48e861b6c53d22cd357ce9002b6f27e852113b266bf1d6610be14add752b3a4e WatchSource:0}: Error finding container 48e861b6c53d22cd357ce9002b6f27e852113b266bf1d6610be14add752b3a4e: Status 404 returned error can't find the container with id 48e861b6c53d22cd357ce9002b6f27e852113b266bf1d6610be14add752b3a4e
	Oct 02 08:05:42 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:42.097602    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.097563985 podStartE2EDuration="40.097563985s" podCreationTimestamp="2025-10-02 08:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:05:42.097106134 +0000 UTC m=+48.404980069" watchObservedRunningTime="2025-10-02 08:05:42.097563985 +0000 UTC m=+48.405437935"
	Oct 02 08:05:44 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:44.418772    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cscrn" podStartSLOduration=44.418750304 podStartE2EDuration="44.418750304s" podCreationTimestamp="2025-10-02 08:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 08:05:42.126694002 +0000 UTC m=+48.434567928" watchObservedRunningTime="2025-10-02 08:05:44.418750304 +0000 UTC m=+50.726624222"
	Oct 02 08:05:44 default-k8s-diff-port-417078 kubelet[1283]: I1002 08:05:44.527981    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4spx\" (UniqueName: \"kubernetes.io/projected/c863efda-3502-432b-8d0a-03bbb8b70f5e-kube-api-access-m4spx\") pod \"busybox\" (UID: \"c863efda-3502-432b-8d0a-03bbb8b70f5e\") " pod="default/busybox"
	Oct 02 08:05:44 default-k8s-diff-port-417078 kubelet[1283]: W1002 08:05:44.762587    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/crio-d1c066914ad826252280b9abe5ecaa73c6669f2f3e7b10b98261238717dff588 WatchSource:0}: Error finding container d1c066914ad826252280b9abe5ecaa73c6669f2f3e7b10b98261238717dff588: Status 404 returned error can't find the container with id d1c066914ad826252280b9abe5ecaa73c6669f2f3e7b10b98261238717dff588
	
	
	==> storage-provisioner [f703f5c2c27bcc6a632d47bec80283d388c4a63d99d581351d8fe7169b474dca] <==
	I1002 08:05:41.617828       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 08:05:41.631520       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 08:05:41.632308       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 08:05:41.635302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:41.642037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:05:41.642361       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 08:05:41.642594       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-417078_59c39b8e-86b5-4859-8823-0776e98ccb5f!
	I1002 08:05:41.645212       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3841bb3-24e2-47d7-9ba0-774032dd0ed1", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-417078_59c39b8e-86b5-4859-8823-0776e98ccb5f became leader
	W1002 08:05:41.645482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:41.668549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:05:41.746216       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-417078_59c39b8e-86b5-4859-8823-0776e98ccb5f!
	W1002 08:05:43.671916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:43.678989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:45.682633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:45.687261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:47.689756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:47.694485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:49.698273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:49.703439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:51.707241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:51.716149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:53.723541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:05:53.730393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-417078 -n default-k8s-diff-port-417078
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-417078 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (7.69s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-009374 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-009374 --alsologtostderr -v=1: exit status 80 (2.433193537s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-009374 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 08:06:10.269054  512021 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:06:10.269167  512021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:06:10.269178  512021 out.go:374] Setting ErrFile to fd 2...
	I1002 08:06:10.269183  512021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:06:10.269510  512021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:06:10.269855  512021 out.go:368] Setting JSON to false
	I1002 08:06:10.269876  512021 mustload.go:65] Loading cluster: newest-cni-009374
	I1002 08:06:10.270426  512021 config.go:182] Loaded profile config "newest-cni-009374": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:06:10.270939  512021 cli_runner.go:164] Run: docker container inspect newest-cni-009374 --format={{.State.Status}}
	I1002 08:06:10.294227  512021 host.go:66] Checking if "newest-cni-009374" exists ...
	I1002 08:06:10.294567  512021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:06:10.358982  512021 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:64 SystemTime:2025-10-02 08:06:10.346180433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:06:10.359673  512021 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-009374 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 08:06:10.362962  512021 out.go:179] * Pausing node newest-cni-009374 ... 
	I1002 08:06:10.366621  512021 host.go:66] Checking if "newest-cni-009374" exists ...
	I1002 08:06:10.366966  512021 ssh_runner.go:195] Run: systemctl --version
	I1002 08:06:10.367021  512021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-009374
	I1002 08:06:10.384434  512021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/newest-cni-009374/id_rsa Username:docker}
	I1002 08:06:10.481858  512021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:06:10.494369  512021 pause.go:51] kubelet running: true
	I1002 08:06:10.494440  512021 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:06:10.730284  512021 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:06:10.730433  512021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:06:10.808184  512021 cri.go:89] found id: "906400afe3b1c23ce826dc1b1317eb26ec68c5106203bd2769ccd7c84427dde8"
	I1002 08:06:10.808260  512021 cri.go:89] found id: "7b299bd284d76d1fbbc244e37a15ab48827810386560aa62783c2b8fd922a614"
	I1002 08:06:10.808280  512021 cri.go:89] found id: "d852a5ee6ab3f080654bd38770cf38424501162ccfb4ca29e7c0cb0043b44cc2"
	I1002 08:06:10.808305  512021 cri.go:89] found id: "5144281ff58cdfc1fa699a355d4776ad326aedf99dd6ba8aca036d3fe972c0a5"
	I1002 08:06:10.808329  512021 cri.go:89] found id: "904406b7e4779f3c8b32fac799a2d1a02b6113419125403d28efe5b8c0330869"
	I1002 08:06:10.808350  512021 cri.go:89] found id: "ca67d62c7642e459b742ee5666f23f57014ee5e56ecb1687a6ab0d9bf8ccc00b"
	I1002 08:06:10.808373  512021 cri.go:89] found id: ""
	I1002 08:06:10.808450  512021 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:06:10.821095  512021 retry.go:31] will retry after 330.034423ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:06:10Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:06:11.151623  512021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:06:11.168698  512021 pause.go:51] kubelet running: false
	I1002 08:06:11.168823  512021 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:06:11.331110  512021 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:06:11.331235  512021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:06:11.406602  512021 cri.go:89] found id: "906400afe3b1c23ce826dc1b1317eb26ec68c5106203bd2769ccd7c84427dde8"
	I1002 08:06:11.406638  512021 cri.go:89] found id: "7b299bd284d76d1fbbc244e37a15ab48827810386560aa62783c2b8fd922a614"
	I1002 08:06:11.406644  512021 cri.go:89] found id: "d852a5ee6ab3f080654bd38770cf38424501162ccfb4ca29e7c0cb0043b44cc2"
	I1002 08:06:11.406649  512021 cri.go:89] found id: "5144281ff58cdfc1fa699a355d4776ad326aedf99dd6ba8aca036d3fe972c0a5"
	I1002 08:06:11.406652  512021 cri.go:89] found id: "904406b7e4779f3c8b32fac799a2d1a02b6113419125403d28efe5b8c0330869"
	I1002 08:06:11.406655  512021 cri.go:89] found id: "ca67d62c7642e459b742ee5666f23f57014ee5e56ecb1687a6ab0d9bf8ccc00b"
	I1002 08:06:11.406658  512021 cri.go:89] found id: ""
	I1002 08:06:11.406726  512021 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:06:11.417758  512021 retry.go:31] will retry after 202.54667ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:06:11Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:06:11.621201  512021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:06:11.635263  512021 pause.go:51] kubelet running: false
	I1002 08:06:11.635341  512021 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:06:11.811439  512021 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:06:11.811571  512021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:06:11.924766  512021 cri.go:89] found id: "906400afe3b1c23ce826dc1b1317eb26ec68c5106203bd2769ccd7c84427dde8"
	I1002 08:06:11.924790  512021 cri.go:89] found id: "7b299bd284d76d1fbbc244e37a15ab48827810386560aa62783c2b8fd922a614"
	I1002 08:06:11.924796  512021 cri.go:89] found id: "d852a5ee6ab3f080654bd38770cf38424501162ccfb4ca29e7c0cb0043b44cc2"
	I1002 08:06:11.924800  512021 cri.go:89] found id: "5144281ff58cdfc1fa699a355d4776ad326aedf99dd6ba8aca036d3fe972c0a5"
	I1002 08:06:11.924803  512021 cri.go:89] found id: "904406b7e4779f3c8b32fac799a2d1a02b6113419125403d28efe5b8c0330869"
	I1002 08:06:11.924809  512021 cri.go:89] found id: "ca67d62c7642e459b742ee5666f23f57014ee5e56ecb1687a6ab0d9bf8ccc00b"
	I1002 08:06:11.924812  512021 cri.go:89] found id: ""
	I1002 08:06:11.924861  512021 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:06:11.939324  512021 retry.go:31] will retry after 392.594673ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:06:11Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:06:12.332935  512021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:06:12.347314  512021 pause.go:51] kubelet running: false
	I1002 08:06:12.347378  512021 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:06:12.517989  512021 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:06:12.518065  512021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:06:12.605021  512021 cri.go:89] found id: "906400afe3b1c23ce826dc1b1317eb26ec68c5106203bd2769ccd7c84427dde8"
	I1002 08:06:12.605042  512021 cri.go:89] found id: "7b299bd284d76d1fbbc244e37a15ab48827810386560aa62783c2b8fd922a614"
	I1002 08:06:12.605046  512021 cri.go:89] found id: "d852a5ee6ab3f080654bd38770cf38424501162ccfb4ca29e7c0cb0043b44cc2"
	I1002 08:06:12.605050  512021 cri.go:89] found id: "5144281ff58cdfc1fa699a355d4776ad326aedf99dd6ba8aca036d3fe972c0a5"
	I1002 08:06:12.605058  512021 cri.go:89] found id: "904406b7e4779f3c8b32fac799a2d1a02b6113419125403d28efe5b8c0330869"
	I1002 08:06:12.605062  512021 cri.go:89] found id: "ca67d62c7642e459b742ee5666f23f57014ee5e56ecb1687a6ab0d9bf8ccc00b"
	I1002 08:06:12.605064  512021 cri.go:89] found id: ""
	I1002 08:06:12.605140  512021 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:06:12.621433  512021 out.go:203] 
	W1002 08:06:12.624388  512021 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:06:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:06:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 08:06:12.624416  512021 out.go:285] * 
	* 
	W1002 08:06:12.629936  512021 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 08:06:12.632963  512021 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-009374 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-009374
helpers_test.go:243: (dbg) docker inspect newest-cni-009374:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5",
	        "Created": "2025-10-02T08:05:13.541866609Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 509386,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T08:05:54.967023544Z",
	            "FinishedAt": "2025-10-02T08:05:53.913668054Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5/hosts",
	        "LogPath": "/var/lib/docker/containers/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5-json.log",
	        "Name": "/newest-cni-009374",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-009374:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-009374",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5",
	                "LowerDir": "/var/lib/docker/overlay2/0c81039f87749c127db4fdc5061be5e43aead4cee26d5be1d059c6ccd3bfd6e0-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c81039f87749c127db4fdc5061be5e43aead4cee26d5be1d059c6ccd3bfd6e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c81039f87749c127db4fdc5061be5e43aead4cee26d5be1d059c6ccd3bfd6e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c81039f87749c127db4fdc5061be5e43aead4cee26d5be1d059c6ccd3bfd6e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-009374",
	                "Source": "/var/lib/docker/volumes/newest-cni-009374/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-009374",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-009374",
	                "name.minikube.sigs.k8s.io": "newest-cni-009374",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8701a8e5370c31688b6651aafe5adf8d6eb7cae56f214a17bca7a47f9206ab31",
	            "SandboxKey": "/var/run/docker/netns/8701a8e5370c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-009374": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:d6:3e:e6:14:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "76416bed3e9b57e23ee4e18e21c895059d8b16740e350a7d0407898e1cd7fb9e",
	                    "EndpointID": "6d98f392068854287e80f83b54b9531123704acaae8dd6e3a3e7d494a70b8c9e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-009374",
	                        "ccc6360467e3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-009374 -n newest-cni-009374
E1002 08:06:13.094296  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-009374 -n newest-cni-009374: exit status 2 (454.766567ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-009374 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-009374 logs -n 25: (1.344469143s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-171347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │                     │
	│ stop    │ -p embed-certs-171347 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-171347 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:04 UTC │
	│ image   │ no-preload-604182 image list --format=json                                                                                                                                                                                                    │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p no-preload-604182 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p disable-driver-mounts-466206                                                                                                                                                                                                               │ disable-driver-mounts-466206 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ start   │ -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:05 UTC │
	│ image   │ embed-certs-171347 image list --format=json                                                                                                                                                                                                   │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p embed-certs-171347 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ delete  │ -p embed-certs-171347                                                                                                                                                                                                                         │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ delete  │ -p embed-certs-171347                                                                                                                                                                                                                         │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ start   │ -p newest-cni-009374 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ addons  │ enable metrics-server -p newest-cni-009374 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-417078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │                     │
	│ stop    │ -p newest-cni-009374 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-009374 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ start   │ -p newest-cni-009374 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:06 UTC │
	│ stop    │ -p default-k8s-diff-port-417078 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-417078 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │ 02 Oct 25 08:06 UTC │
	│ start   │ -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │                     │
	│ image   │ newest-cni-009374 image list --format=json                                                                                                                                                                                                    │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │ 02 Oct 25 08:06 UTC │
	│ pause   │ -p newest-cni-009374 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:06:08
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:06:08.416118  511270 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:06:08.416363  511270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:06:08.416393  511270 out.go:374] Setting ErrFile to fd 2...
	I1002 08:06:08.416412  511270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:06:08.416710  511270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:06:08.417157  511270 out.go:368] Setting JSON to false
	I1002 08:06:08.418211  511270 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10120,"bootTime":1759382249,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 08:06:08.418314  511270 start.go:140] virtualization:  
	I1002 08:06:08.421839  511270 out.go:179] * [default-k8s-diff-port-417078] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:06:08.425249  511270 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:06:08.425323  511270 notify.go:220] Checking for updates...
	I1002 08:06:08.432422  511270 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:06:08.435498  511270 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:06:08.438917  511270 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 08:06:08.441719  511270 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:06:08.444708  511270 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:06:08.448014  511270 config.go:182] Loaded profile config "default-k8s-diff-port-417078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:06:08.448562  511270 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:06:08.491373  511270 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:06:08.491501  511270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:06:08.596810  511270 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 08:06:08.586991683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:06:08.596922  511270 docker.go:318] overlay module found
	I1002 08:06:08.600079  511270 out.go:179] * Using the docker driver based on existing profile
	I1002 08:06:08.623950  509212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.679883433s)
	I1002 08:06:08.624014  509212 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.655420381s)
	I1002 08:06:08.624055  509212 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:06:08.624118  509212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:06:08.624208  509212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.517249873s)
	I1002 08:06:08.708169  509212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.317074623s)
	I1002 08:06:08.708323  509212 api_server.go:72] duration metric: took 6.111344439s to wait for apiserver process to appear ...
	I1002 08:06:08.708333  509212 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:06:08.708351  509212 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 08:06:08.712452  509212 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-009374 addons enable metrics-server
	
	I1002 08:06:08.716416  509212 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1002 08:06:08.602895  511270 start.go:304] selected driver: docker
	I1002 08:06:08.602914  511270 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-417078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417078 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:06:08.603007  511270 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:06:08.603698  511270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:06:08.717000  511270 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 08:06:08.704762503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:06:08.717355  511270 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:06:08.717381  511270 cni.go:84] Creating CNI manager for ""
	I1002 08:06:08.717435  511270 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:06:08.717620  511270 start.go:348] cluster config:
	{Name:default-k8s-diff-port-417078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:06:08.721752  511270 out.go:179] * Starting "default-k8s-diff-port-417078" primary control-plane node in "default-k8s-diff-port-417078" cluster
	I1002 08:06:08.724551  511270 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 08:06:08.727517  511270 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 08:06:08.730336  511270 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:06:08.730398  511270 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 08:06:08.730413  511270 cache.go:58] Caching tarball of preloaded images
	I1002 08:06:08.730517  511270 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 08:06:08.730531  511270 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 08:06:08.730650  511270 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/config.json ...
	I1002 08:06:08.730877  511270 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 08:06:08.752628  511270 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 08:06:08.752651  511270 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 08:06:08.752668  511270 cache.go:232] Successfully downloaded all kic artifacts
	I1002 08:06:08.752689  511270 start.go:360] acquireMachinesLock for default-k8s-diff-port-417078: {Name:mk71638280421d86b548f4ec42a5f6c5c61e1f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:06:08.752764  511270 start.go:364] duration metric: took 47.566µs to acquireMachinesLock for "default-k8s-diff-port-417078"
	I1002 08:06:08.752791  511270 start.go:96] Skipping create...Using existing machine configuration
	I1002 08:06:08.752817  511270 fix.go:54] fixHost starting: 
	I1002 08:06:08.753084  511270 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Status}}
	I1002 08:06:08.778118  511270 fix.go:112] recreateIfNeeded on default-k8s-diff-port-417078: state=Stopped err=<nil>
	W1002 08:06:08.778153  511270 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 08:06:08.721064  509212 addons.go:514] duration metric: took 6.123743553s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1002 08:06:08.723392  509212 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 08:06:08.723416  509212 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 08:06:09.208811  509212 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 08:06:09.219723  509212 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 08:06:09.221120  509212 api_server.go:141] control plane version: v1.34.1
	I1002 08:06:09.221150  509212 api_server.go:131] duration metric: took 512.810615ms to wait for apiserver health ...
	I1002 08:06:09.221160  509212 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:06:09.226685  509212 system_pods.go:59] 8 kube-system pods found
	I1002 08:06:09.226728  509212 system_pods.go:61] "coredns-66bc5c9577-p2j8l" [a810de8d-b66f-404e-8b14-911266df5272] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 08:06:09.226739  509212 system_pods.go:61] "etcd-newest-cni-009374" [cabdca96-8777-4057-9e06-1781a4bca780] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:06:09.226745  509212 system_pods.go:61] "kindnet-f45p7" [c9cf92b3-8ccb-4487-b783-29df2834d679] Running
	I1002 08:06:09.226752  509212 system_pods.go:61] "kube-apiserver-newest-cni-009374" [986bf8bd-e659-4a96-9fa6-55f2e838b6dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:06:09.226758  509212 system_pods.go:61] "kube-controller-manager-newest-cni-009374" [b41b9bc3-59aa-4596-9d21-207dfe86cf1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:06:09.226764  509212 system_pods.go:61] "kube-proxy-qsv24" [db609c90-476d-450d-a43d-0600b893f712] Running
	I1002 08:06:09.226770  509212 system_pods.go:61] "kube-scheduler-newest-cni-009374" [5e2e0730-38ef-4779-a6a6-0fe4a374388f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:06:09.226775  509212 system_pods.go:61] "storage-provisioner" [187ddc8e-cf7d-471a-b913-c757e198b82a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 08:06:09.226788  509212 system_pods.go:74] duration metric: took 5.621385ms to wait for pod list to return data ...
	I1002 08:06:09.226800  509212 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:06:09.230289  509212 default_sa.go:45] found service account: "default"
	I1002 08:06:09.230317  509212 default_sa.go:55] duration metric: took 3.509858ms for default service account to be created ...
	I1002 08:06:09.230331  509212 kubeadm.go:586] duration metric: took 6.633353096s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 08:06:09.230348  509212 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:06:09.234131  509212 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:06:09.234176  509212 node_conditions.go:123] node cpu capacity is 2
	I1002 08:06:09.234191  509212 node_conditions.go:105] duration metric: took 3.838222ms to run NodePressure ...
	I1002 08:06:09.234203  509212 start.go:241] waiting for startup goroutines ...
	I1002 08:06:09.234211  509212 start.go:246] waiting for cluster config update ...
	I1002 08:06:09.234222  509212 start.go:255] writing updated cluster config ...
	I1002 08:06:09.234532  509212 ssh_runner.go:195] Run: rm -f paused
	I1002 08:06:09.334251  509212 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 08:06:09.337612  509212 out.go:179] * Done! kubectl is now configured to use "newest-cni-009374" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.262094128Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.268050835Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=78470fa3-7c6d-4d5a-8160-d5198506a080 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.27138703Z" level=info msg="Ran pod sandbox 258889e65e54a8fc2dd9d50f5a8cb30580bba70e50f774f7873a327ed75701e8 with infra container: kube-system/kube-proxy-qsv24/POD" id=78470fa3-7c6d-4d5a-8160-d5198506a080 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.282210178Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=04ee83a6-e530-4c81-a7bf-a14cce910cf6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.283714356Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=708c1dc1-fd20-4b7d-84dd-c45c611fa618 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.284778539Z" level=info msg="Creating container: kube-system/kube-proxy-qsv24/kube-proxy" id=1bb7d60f-ceb3-4ce3-952a-b4f9c19ca700 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.285029757Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.286979749Z" level=info msg="Running pod sandbox: kube-system/kindnet-f45p7/POD" id=887e8339-21ce-4a43-b518-e2087500151b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.287155849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.2933995Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.2962407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.297473064Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=887e8339-21ce-4a43-b518-e2087500151b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.311822801Z" level=info msg="Ran pod sandbox 00161c3c1e91d4369e9a2c5ee93d6b072cece0e7be813803b471a8a446e14da9 with infra container: kube-system/kindnet-f45p7/POD" id=887e8339-21ce-4a43-b518-e2087500151b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.318527273Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d9bdc518-fc2c-409c-b5c6-a52b45b1b780 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.320505999Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d4a445c5-c659-4aef-9323-11191df3d9c1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.321762757Z" level=info msg="Creating container: kube-system/kindnet-f45p7/kindnet-cni" id=d90f94ff-4168-4313-bfc3-348f2cd2c118 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.322171565Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.334738623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.335428771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.39078678Z" level=info msg="Created container 906400afe3b1c23ce826dc1b1317eb26ec68c5106203bd2769ccd7c84427dde8: kube-system/kindnet-f45p7/kindnet-cni" id=d90f94ff-4168-4313-bfc3-348f2cd2c118 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.397188267Z" level=info msg="Starting container: 906400afe3b1c23ce826dc1b1317eb26ec68c5106203bd2769ccd7c84427dde8" id=0c46cbf1-ed00-4461-917f-fabf7abf69d0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.406587172Z" level=info msg="Started container" PID=1066 containerID=906400afe3b1c23ce826dc1b1317eb26ec68c5106203bd2769ccd7c84427dde8 description=kube-system/kindnet-f45p7/kindnet-cni id=0c46cbf1-ed00-4461-917f-fabf7abf69d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=00161c3c1e91d4369e9a2c5ee93d6b072cece0e7be813803b471a8a446e14da9
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.418850358Z" level=info msg="Created container 7b299bd284d76d1fbbc244e37a15ab48827810386560aa62783c2b8fd922a614: kube-system/kube-proxy-qsv24/kube-proxy" id=1bb7d60f-ceb3-4ce3-952a-b4f9c19ca700 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.419780943Z" level=info msg="Starting container: 7b299bd284d76d1fbbc244e37a15ab48827810386560aa62783c2b8fd922a614" id=2a06993b-b786-4923-ad25-1a30b055c87a name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.427805904Z" level=info msg="Started container" PID=1064 containerID=7b299bd284d76d1fbbc244e37a15ab48827810386560aa62783c2b8fd922a614 description=kube-system/kube-proxy-qsv24/kube-proxy id=2a06993b-b786-4923-ad25-1a30b055c87a name=/runtime.v1.RuntimeService/StartContainer sandboxID=258889e65e54a8fc2dd9d50f5a8cb30580bba70e50f774f7873a327ed75701e8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	906400afe3b1c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   00161c3c1e91d       kindnet-f45p7                               kube-system
	7b299bd284d76       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   258889e65e54a       kube-proxy-qsv24                            kube-system
	d852a5ee6ab3f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   1                   0ba77bf08c65b       kube-controller-manager-newest-cni-009374   kube-system
	5144281ff58cd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      1                   4445a756796ba       etcd-newest-cni-009374                      kube-system
	904406b7e4779       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            1                   a0dcacbba4473       kube-scheduler-newest-cni-009374            kube-system
	ca67d62c7642e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            1                   2a1caaee0d3d6       kube-apiserver-newest-cni-009374            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-009374
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-009374
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=newest-cni-009374
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T08_05_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 08:05:40 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-009374
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:06:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:06:07 +0000   Thu, 02 Oct 2025 08:05:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:06:07 +0000   Thu, 02 Oct 2025 08:05:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:06:07 +0000   Thu, 02 Oct 2025 08:05:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 02 Oct 2025 08:06:07 +0000   Thu, 02 Oct 2025 08:05:36 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-009374
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 584d1bfd6a1649509061241d2485e843
	  System UUID:                ee2e55db-e4b5-4d38-b86d-d81369f0c72d
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-009374                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-f45p7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-009374             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-009374    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-qsv24                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-009374             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   NodeHasSufficientPID     31s                kubelet          Node newest-cni-009374 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  31s                kubelet          Node newest-cni-009374 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    31s                kubelet          Node newest-cni-009374 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-009374 event: Registered Node newest-cni-009374 in Controller
	  Normal   Starting                 13s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 13s)  kubelet          Node newest-cni-009374 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 13s)  kubelet          Node newest-cni-009374 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x8 over 13s)  kubelet          Node newest-cni-009374 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-009374 event: Registered Node newest-cni-009374 in Controller
	
	
	==> dmesg <==
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:00] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:03] overlayfs: idmapped layers are currently not supported
	[ +38.953360] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:05] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5144281ff58cdfc1fa699a355d4776ad326aedf99dd6ba8aca036d3fe972c0a5] <==
	{"level":"warn","ts":"2025-10-02T08:06:05.296084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.323396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.349335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.361388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.379997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.392844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.423620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.441257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.447540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.470225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.502528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.502793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.517181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.536759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.555906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.570277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.582785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.606610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.640896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.662886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.682567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.709053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.719948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.742655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.801738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59634","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:06:14 up  2:48,  0 user,  load average: 3.15, 3.04, 2.31
	Linux newest-cni-009374 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [906400afe3b1c23ce826dc1b1317eb26ec68c5106203bd2769ccd7c84427dde8] <==
	I1002 08:06:07.600594       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 08:06:07.600809       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 08:06:07.600915       1 main.go:148] setting mtu 1500 for CNI 
	I1002 08:06:07.600927       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 08:06:07.600936       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T08:06:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 08:06:07.823706       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 08:06:07.823728       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 08:06:07.823737       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 08:06:07.824032       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [ca67d62c7642e459b742ee5666f23f57014ee5e56ecb1687a6ab0d9bf8ccc00b] <==
	I1002 08:06:06.988794       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 08:06:07.007066       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:06:07.008020       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 08:06:07.009300       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1002 08:06:07.010180       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 08:06:07.010196       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 08:06:07.010952       1 aggregator.go:171] initial CRD sync complete...
	I1002 08:06:07.010971       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 08:06:07.010981       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 08:06:07.010988       1 cache.go:39] Caches are synced for autoregister controller
	I1002 08:06:07.024507       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 08:06:07.052449       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 08:06:07.087207       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1002 08:06:07.151586       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 08:06:07.500715       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:06:08.158100       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 08:06:08.368416       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 08:06:08.420867       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:06:08.440421       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:06:08.646260       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.98.185"}
	I1002 08:06:08.690955       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.80.80"}
	I1002 08:06:11.559904       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 08:06:11.610498       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 08:06:11.716397       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 08:06:11.761472       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [d852a5ee6ab3f080654bd38770cf38424501162ccfb4ca29e7c0cb0043b44cc2] <==
	I1002 08:06:11.155933       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 08:06:11.155978       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 08:06:11.157770       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 08:06:11.156409       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 08:06:11.157857       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 08:06:11.164393       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 08:06:11.156075       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 08:06:11.164436       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 08:06:11.164445       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 08:06:11.168978       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 08:06:11.175645       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 08:06:11.177807       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 08:06:11.187294       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:06:11.187417       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 08:06:11.192776       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 08:06:11.192897       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 08:06:11.192952       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 08:06:11.192984       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 08:06:11.193012       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 08:06:11.204964       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 08:06:11.205096       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 08:06:11.205204       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 08:06:11.205315       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-009374"
	I1002 08:06:11.206188       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 08:06:11.211283       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	
	
	==> kube-proxy [7b299bd284d76d1fbbc244e37a15ab48827810386560aa62783c2b8fd922a614] <==
	I1002 08:06:07.634753       1 server_linux.go:53] "Using iptables proxy"
	I1002 08:06:08.113355       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 08:06:08.222696       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 08:06:08.237908       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 08:06:08.238005       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 08:06:08.396061       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 08:06:08.396262       1 server_linux.go:132] "Using iptables Proxier"
	I1002 08:06:08.426645       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 08:06:08.427117       1 server.go:527] "Version info" version="v1.34.1"
	I1002 08:06:08.455252       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:06:08.519934       1 config.go:200] "Starting service config controller"
	I1002 08:06:08.519956       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 08:06:08.519981       1 config.go:106] "Starting endpoint slice config controller"
	I1002 08:06:08.519986       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 08:06:08.519998       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 08:06:08.520002       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 08:06:08.537607       1 config.go:309] "Starting node config controller"
	I1002 08:06:08.537625       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 08:06:08.537633       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 08:06:08.622879       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 08:06:08.622923       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 08:06:08.623011       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [904406b7e4779f3c8b32fac799a2d1a02b6113419125403d28efe5b8c0330869] <==
	I1002 08:06:05.449503       1 serving.go:386] Generated self-signed cert in-memory
	I1002 08:06:07.253006       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 08:06:07.257921       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:06:07.308024       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 08:06:07.308100       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 08:06:07.308123       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 08:06:07.308159       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 08:06:07.310373       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:06:07.310387       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:06:07.310405       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:06:07.310412       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:06:07.410760       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:06:07.410803       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:06:07.431266       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 02 08:06:06 newest-cni-009374 kubelet[727]: I1002 08:06:06.740059     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-009374"
	Oct 02 08:06:06 newest-cni-009374 kubelet[727]: I1002 08:06:06.813676     727 apiserver.go:52] "Watching apiserver"
	Oct 02 08:06:06 newest-cni-009374 kubelet[727]: I1002 08:06:06.946812     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-009374"
	Oct 02 08:06:06 newest-cni-009374 kubelet[727]: I1002 08:06:06.961858     727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.057247     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9cf92b3-8ccb-4487-b783-29df2834d679-xtables-lock\") pod \"kindnet-f45p7\" (UID: \"c9cf92b3-8ccb-4487-b783-29df2834d679\") " pod="kube-system/kindnet-f45p7"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.057303     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db609c90-476d-450d-a43d-0600b893f712-xtables-lock\") pod \"kube-proxy-qsv24\" (UID: \"db609c90-476d-450d-a43d-0600b893f712\") " pod="kube-system/kube-proxy-qsv24"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.057343     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9cf92b3-8ccb-4487-b783-29df2834d679-lib-modules\") pod \"kindnet-f45p7\" (UID: \"c9cf92b3-8ccb-4487-b783-29df2834d679\") " pod="kube-system/kindnet-f45p7"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.057992     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db609c90-476d-450d-a43d-0600b893f712-lib-modules\") pod \"kube-proxy-qsv24\" (UID: \"db609c90-476d-450d-a43d-0600b893f712\") " pod="kube-system/kube-proxy-qsv24"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.058055     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c9cf92b3-8ccb-4487-b783-29df2834d679-cni-cfg\") pod \"kindnet-f45p7\" (UID: \"c9cf92b3-8ccb-4487-b783-29df2834d679\") " pod="kube-system/kindnet-f45p7"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.066069     727 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.066176     727 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.066205     727 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.067072     727 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: E1002 08:06:07.103486     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-009374\" already exists" pod="kube-system/etcd-newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: E1002 08:06:07.103781     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-009374\" already exists" pod="kube-system/etcd-newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.103799     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.141313     727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: E1002 08:06:07.200064     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-009374\" already exists" pod="kube-system/kube-apiserver-newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.200100     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: E1002 08:06:07.232617     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-009374\" already exists" pod="kube-system/kube-controller-manager-newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.232656     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: E1002 08:06:07.248615     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-009374\" already exists" pod="kube-system/kube-scheduler-newest-cni-009374"
	Oct 02 08:06:10 newest-cni-009374 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 08:06:10 newest-cni-009374 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 08:06:10 newest-cni-009374 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
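The kubelet log above ends with systemd stopping kubelet.service, which is part of what pausing a profile does to the node agent before the harness re-checks status below. A minimal sketch for confirming the unit state from the host, assuming the kic node container name from this report and that docker exec and systemctl are available inside it (both hold for the kicbase image):

	// pausecheck.go: query systemd inside the minikube node container to see
	// whether the kubelet unit is still active after the pause attempt.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// `systemctl is-active` prints "active", "inactive", or "failed" and
		// exits non-zero when the unit is not active, so only the printed
		// output is inspected here.
		out, _ := exec.Command("docker", "exec", "newest-cni-009374",
			"systemctl", "is-active", "kubelet").CombinedOutput()
		fmt.Printf("kubelet unit: %s\n", strings.TrimSpace(string(out)))
	}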
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-009374 -n newest-cni-009374
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-009374 -n newest-cni-009374: exit status 2 (432.969229ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
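The status probe above pulls a single field out of minikube status with a Go template; the non-zero exit is logged as "may be ok" because minikube status encodes component state in its exit code rather than signalling a hard command failure. A minimal sketch of the same probe via os/exec, assuming the binary path and profile name used in this report:

	// statusfield.go: run `minikube status` with a Go template and report the
	// printed field and the exit code separately, as the harness does above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "newest-cni-009374", "-n", "newest-cni-009374")
		out, err := cmd.CombinedOutput()
		fmt.Printf("field: %s", out) // "Running" in the run captured here
		if exitErr, ok := err.(*exec.ExitError); ok {
			// Exit status 2 in this report reflects component state rather
			// than a crashed command, so the harness keeps going.
			fmt.Printf("exit code: %d\n", exitErr.ExitCode())
		}
	}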
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-009374 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-p2j8l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bq7rk kubernetes-dashboard-855c9754f9-sp7t2
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-009374 describe pod coredns-66bc5c9577-p2j8l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bq7rk kubernetes-dashboard-855c9754f9-sp7t2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-009374 describe pod coredns-66bc5c9577-p2j8l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bq7rk kubernetes-dashboard-855c9754f9-sp7t2: exit status 1 (128.50623ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-p2j8l" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-bq7rk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-sp7t2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-009374 describe pod coredns-66bc5c9577-p2j8l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bq7rk kubernetes-dashboard-855c9754f9-sp7t2: exit status 1
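The non-running pods above were found with kubectl's field selector status.phase!=Running across all namespaces; the follow-up describe then returns NotFound because it is run without a namespace flag and so only searches the default namespace, while those pods live in kube-system and kubernetes-dashboard. For reference, a minimal client-go sketch of the same listing step, assuming a kubeconfig at the default path (the harness instead passes --context newest-cni-009374):

	// listpending.go: list every pod whose phase is not Running, across all
	// namespaces, mirroring the kubectl --field-selector query above.
	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same selector the harness passes to kubectl.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}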
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-009374
helpers_test.go:243: (dbg) docker inspect newest-cni-009374:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5",
	        "Created": "2025-10-02T08:05:13.541866609Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 509386,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T08:05:54.967023544Z",
	            "FinishedAt": "2025-10-02T08:05:53.913668054Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5/hosts",
	        "LogPath": "/var/lib/docker/containers/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5/ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5-json.log",
	        "Name": "/newest-cni-009374",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-009374:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-009374",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ccc6360467e366783e6999139cdfe4b770acfc2cfa95f674686aff67e6ec62f5",
	                "LowerDir": "/var/lib/docker/overlay2/0c81039f87749c127db4fdc5061be5e43aead4cee26d5be1d059c6ccd3bfd6e0-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c81039f87749c127db4fdc5061be5e43aead4cee26d5be1d059c6ccd3bfd6e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c81039f87749c127db4fdc5061be5e43aead4cee26d5be1d059c6ccd3bfd6e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c81039f87749c127db4fdc5061be5e43aead4cee26d5be1d059c6ccd3bfd6e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-009374",
	                "Source": "/var/lib/docker/volumes/newest-cni-009374/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-009374",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-009374",
	                "name.minikube.sigs.k8s.io": "newest-cni-009374",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8701a8e5370c31688b6651aafe5adf8d6eb7cae56f214a17bca7a47f9206ab31",
	            "SandboxKey": "/var/run/docker/netns/8701a8e5370c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-009374": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:d6:3e:e6:14:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "76416bed3e9b57e23ee4e18e21c895059d8b16740e350a7d0407898e1cd7fb9e",
	                    "EndpointID": "6d98f392068854287e80f83b54b9531123704acaae8dd6e3a3e7d494a70b8c9e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-009374",
	                        "ccc6360467e3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
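The NetworkSettings.Ports section of the inspect output above is where the host-side endpoints for this node come from; for example, the API server port 8443/tcp is published on 127.0.0.1:33441. A minimal sketch that extracts that mapping by decoding the docker inspect JSON, assuming the container name from this report:

	// ports.go: pull the host address mapped to the node's 8443/tcp port out
	// of `docker inspect`, matching the Ports block shown above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "newest-cni-009374").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		if len(containers) == 0 {
			panic("container not found")
		}
		for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
			// In this report the mapping prints as 127.0.0.1:33441.
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
		}
	}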
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-009374 -n newest-cni-009374
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-009374 -n newest-cni-009374: exit status 2 (454.616847ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-009374 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-009374 logs -n 25: (1.503565589s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-171347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │                     │
	│ stop    │ -p embed-certs-171347 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ addons  │ enable dashboard -p embed-certs-171347 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:03 UTC │
	│ start   │ -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:03 UTC │ 02 Oct 25 08:04 UTC │
	│ image   │ no-preload-604182 image list --format=json                                                                                                                                                                                                    │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p no-preload-604182 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p disable-driver-mounts-466206                                                                                                                                                                                                               │ disable-driver-mounts-466206 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ start   │ -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:05 UTC │
	│ image   │ embed-certs-171347 image list --format=json                                                                                                                                                                                                   │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p embed-certs-171347 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ delete  │ -p embed-certs-171347                                                                                                                                                                                                                         │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ delete  │ -p embed-certs-171347                                                                                                                                                                                                                         │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ start   │ -p newest-cni-009374 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ addons  │ enable metrics-server -p newest-cni-009374 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-417078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │                     │
	│ stop    │ -p newest-cni-009374 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-009374 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ start   │ -p newest-cni-009374 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:06 UTC │
	│ stop    │ -p default-k8s-diff-port-417078 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-417078 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │ 02 Oct 25 08:06 UTC │
	│ start   │ -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │                     │
	│ image   │ newest-cni-009374 image list --format=json                                                                                                                                                                                                    │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │ 02 Oct 25 08:06 UTC │
	│ pause   │ -p newest-cni-009374 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:06:08
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:06:08.416118  511270 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:06:08.416363  511270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:06:08.416393  511270 out.go:374] Setting ErrFile to fd 2...
	I1002 08:06:08.416412  511270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:06:08.416710  511270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:06:08.417157  511270 out.go:368] Setting JSON to false
	I1002 08:06:08.418211  511270 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10120,"bootTime":1759382249,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 08:06:08.418314  511270 start.go:140] virtualization:  
	I1002 08:06:08.421839  511270 out.go:179] * [default-k8s-diff-port-417078] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:06:08.425249  511270 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:06:08.425323  511270 notify.go:220] Checking for updates...
	I1002 08:06:08.432422  511270 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:06:08.435498  511270 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:06:08.438917  511270 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 08:06:08.441719  511270 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:06:08.444708  511270 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:06:08.448014  511270 config.go:182] Loaded profile config "default-k8s-diff-port-417078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:06:08.448562  511270 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:06:08.491373  511270 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:06:08.491501  511270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:06:08.596810  511270 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 08:06:08.586991683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:06:08.596922  511270 docker.go:318] overlay module found
	I1002 08:06:08.600079  511270 out.go:179] * Using the docker driver based on existing profile
	I1002 08:06:08.623950  509212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.679883433s)
	I1002 08:06:08.624014  509212 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.655420381s)
	I1002 08:06:08.624055  509212 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:06:08.624118  509212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:06:08.624208  509212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.517249873s)
	I1002 08:06:08.708169  509212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.317074623s)
	I1002 08:06:08.708323  509212 api_server.go:72] duration metric: took 6.111344439s to wait for apiserver process to appear ...
	I1002 08:06:08.708333  509212 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:06:08.708351  509212 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 08:06:08.712452  509212 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-009374 addons enable metrics-server
	
	I1002 08:06:08.716416  509212 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1002 08:06:08.602895  511270 start.go:304] selected driver: docker
	I1002 08:06:08.602914  511270 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-417078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:06:08.603007  511270 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:06:08.603698  511270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:06:08.717000  511270 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 08:06:08.704762503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:06:08.717355  511270 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:06:08.717381  511270 cni.go:84] Creating CNI manager for ""
	I1002 08:06:08.717435  511270 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:06:08.717620  511270 start.go:348] cluster config:
	{Name:default-k8s-diff-port-417078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-417078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:06:08.721752  511270 out.go:179] * Starting "default-k8s-diff-port-417078" primary control-plane node in "default-k8s-diff-port-417078" cluster
	I1002 08:06:08.724551  511270 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 08:06:08.727517  511270 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 08:06:08.730336  511270 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:06:08.730398  511270 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 08:06:08.730413  511270 cache.go:58] Caching tarball of preloaded images
	I1002 08:06:08.730517  511270 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 08:06:08.730531  511270 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 08:06:08.730650  511270 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/config.json ...
	I1002 08:06:08.730877  511270 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 08:06:08.752628  511270 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 08:06:08.752651  511270 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 08:06:08.752668  511270 cache.go:232] Successfully downloaded all kic artifacts
	I1002 08:06:08.752689  511270 start.go:360] acquireMachinesLock for default-k8s-diff-port-417078: {Name:mk71638280421d86b548f4ec42a5f6c5c61e1f89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:06:08.752764  511270 start.go:364] duration metric: took 47.566µs to acquireMachinesLock for "default-k8s-diff-port-417078"
	I1002 08:06:08.752791  511270 start.go:96] Skipping create...Using existing machine configuration
	I1002 08:06:08.752817  511270 fix.go:54] fixHost starting: 
	I1002 08:06:08.753084  511270 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Status}}
	I1002 08:06:08.778118  511270 fix.go:112] recreateIfNeeded on default-k8s-diff-port-417078: state=Stopped err=<nil>
	W1002 08:06:08.778153  511270 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 08:06:08.721064  509212 addons.go:514] duration metric: took 6.123743553s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1002 08:06:08.723392  509212 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 08:06:08.723416  509212 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 08:06:09.208811  509212 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 08:06:09.219723  509212 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 08:06:09.221120  509212 api_server.go:141] control plane version: v1.34.1
	I1002 08:06:09.221150  509212 api_server.go:131] duration metric: took 512.810615ms to wait for apiserver health ...
	I1002 08:06:09.221160  509212 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:06:09.226685  509212 system_pods.go:59] 8 kube-system pods found
	I1002 08:06:09.226728  509212 system_pods.go:61] "coredns-66bc5c9577-p2j8l" [a810de8d-b66f-404e-8b14-911266df5272] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 08:06:09.226739  509212 system_pods.go:61] "etcd-newest-cni-009374" [cabdca96-8777-4057-9e06-1781a4bca780] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:06:09.226745  509212 system_pods.go:61] "kindnet-f45p7" [c9cf92b3-8ccb-4487-b783-29df2834d679] Running
	I1002 08:06:09.226752  509212 system_pods.go:61] "kube-apiserver-newest-cni-009374" [986bf8bd-e659-4a96-9fa6-55f2e838b6dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:06:09.226758  509212 system_pods.go:61] "kube-controller-manager-newest-cni-009374" [b41b9bc3-59aa-4596-9d21-207dfe86cf1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:06:09.226764  509212 system_pods.go:61] "kube-proxy-qsv24" [db609c90-476d-450d-a43d-0600b893f712] Running
	I1002 08:06:09.226770  509212 system_pods.go:61] "kube-scheduler-newest-cni-009374" [5e2e0730-38ef-4779-a6a6-0fe4a374388f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:06:09.226775  509212 system_pods.go:61] "storage-provisioner" [187ddc8e-cf7d-471a-b913-c757e198b82a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 08:06:09.226788  509212 system_pods.go:74] duration metric: took 5.621385ms to wait for pod list to return data ...
	I1002 08:06:09.226800  509212 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:06:09.230289  509212 default_sa.go:45] found service account: "default"
	I1002 08:06:09.230317  509212 default_sa.go:55] duration metric: took 3.509858ms for default service account to be created ...
	I1002 08:06:09.230331  509212 kubeadm.go:586] duration metric: took 6.633353096s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 08:06:09.230348  509212 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:06:09.234131  509212 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:06:09.234176  509212 node_conditions.go:123] node cpu capacity is 2
	I1002 08:06:09.234191  509212 node_conditions.go:105] duration metric: took 3.838222ms to run NodePressure ...
	I1002 08:06:09.234203  509212 start.go:241] waiting for startup goroutines ...
	I1002 08:06:09.234211  509212 start.go:246] waiting for cluster config update ...
	I1002 08:06:09.234222  509212 start.go:255] writing updated cluster config ...
	I1002 08:06:09.234532  509212 ssh_runner.go:195] Run: rm -f paused
	I1002 08:06:09.334251  509212 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 08:06:09.337612  509212 out.go:179] * Done! kubectl is now configured to use "newest-cni-009374" cluster and "default" namespace by default
	I1002 08:06:08.781894  511270 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-417078" ...
	I1002 08:06:08.781981  511270 cli_runner.go:164] Run: docker start default-k8s-diff-port-417078
	I1002 08:06:09.086603  511270 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Status}}
	I1002 08:06:09.115204  511270 kic.go:430] container "default-k8s-diff-port-417078" state is running.
	I1002 08:06:09.115602  511270 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-417078
	I1002 08:06:09.140557  511270 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/config.json ...
	I1002 08:06:09.140942  511270 machine.go:93] provisionDockerMachine start ...
	I1002 08:06:09.141077  511270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:06:09.162939  511270 main.go:141] libmachine: Using SSH client type: native
	I1002 08:06:09.163303  511270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1002 08:06:09.163323  511270 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 08:06:09.164040  511270 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33564->127.0.0.1:33443: read: connection reset by peer
	I1002 08:06:12.302909  511270 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-417078
	
	I1002 08:06:12.302996  511270 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-417078"
	I1002 08:06:12.303117  511270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:06:12.321152  511270 main.go:141] libmachine: Using SSH client type: native
	I1002 08:06:12.321462  511270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1002 08:06:12.321474  511270 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-417078 && echo "default-k8s-diff-port-417078" | sudo tee /etc/hostname
	I1002 08:06:12.492040  511270 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-417078
	
	I1002 08:06:12.492202  511270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:06:12.517033  511270 main.go:141] libmachine: Using SSH client type: native
	I1002 08:06:12.519275  511270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1002 08:06:12.519310  511270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-417078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-417078/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-417078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 08:06:12.673093  511270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 08:06:12.673117  511270 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 08:06:12.673140  511270 ubuntu.go:190] setting up certificates
	I1002 08:06:12.673150  511270 provision.go:84] configureAuth start
	I1002 08:06:12.673205  511270 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-417078
	I1002 08:06:12.702283  511270 provision.go:143] copyHostCerts
	I1002 08:06:12.702357  511270 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 08:06:12.702378  511270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 08:06:12.702451  511270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 08:06:12.702556  511270 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 08:06:12.702567  511270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 08:06:12.702594  511270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 08:06:12.702661  511270 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 08:06:12.702671  511270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 08:06:12.702696  511270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 08:06:12.702752  511270 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-417078 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-417078 localhost minikube]
	I1002 08:06:13.329529  511270 provision.go:177] copyRemoteCerts
	I1002 08:06:13.329682  511270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 08:06:13.329764  511270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:06:13.348402  511270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.262094128Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.268050835Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=78470fa3-7c6d-4d5a-8160-d5198506a080 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.27138703Z" level=info msg="Ran pod sandbox 258889e65e54a8fc2dd9d50f5a8cb30580bba70e50f774f7873a327ed75701e8 with infra container: kube-system/kube-proxy-qsv24/POD" id=78470fa3-7c6d-4d5a-8160-d5198506a080 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.282210178Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=04ee83a6-e530-4c81-a7bf-a14cce910cf6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.283714356Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=708c1dc1-fd20-4b7d-84dd-c45c611fa618 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.284778539Z" level=info msg="Creating container: kube-system/kube-proxy-qsv24/kube-proxy" id=1bb7d60f-ceb3-4ce3-952a-b4f9c19ca700 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.285029757Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.286979749Z" level=info msg="Running pod sandbox: kube-system/kindnet-f45p7/POD" id=887e8339-21ce-4a43-b518-e2087500151b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.287155849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.2933995Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.2962407Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.297473064Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=887e8339-21ce-4a43-b518-e2087500151b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.311822801Z" level=info msg="Ran pod sandbox 00161c3c1e91d4369e9a2c5ee93d6b072cece0e7be813803b471a8a446e14da9 with infra container: kube-system/kindnet-f45p7/POD" id=887e8339-21ce-4a43-b518-e2087500151b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.318527273Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d9bdc518-fc2c-409c-b5c6-a52b45b1b780 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.320505999Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d4a445c5-c659-4aef-9323-11191df3d9c1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.321762757Z" level=info msg="Creating container: kube-system/kindnet-f45p7/kindnet-cni" id=d90f94ff-4168-4313-bfc3-348f2cd2c118 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.322171565Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.334738623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.335428771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.39078678Z" level=info msg="Created container 906400afe3b1c23ce826dc1b1317eb26ec68c5106203bd2769ccd7c84427dde8: kube-system/kindnet-f45p7/kindnet-cni" id=d90f94ff-4168-4313-bfc3-348f2cd2c118 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.397188267Z" level=info msg="Starting container: 906400afe3b1c23ce826dc1b1317eb26ec68c5106203bd2769ccd7c84427dde8" id=0c46cbf1-ed00-4461-917f-fabf7abf69d0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.406587172Z" level=info msg="Started container" PID=1066 containerID=906400afe3b1c23ce826dc1b1317eb26ec68c5106203bd2769ccd7c84427dde8 description=kube-system/kindnet-f45p7/kindnet-cni id=0c46cbf1-ed00-4461-917f-fabf7abf69d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=00161c3c1e91d4369e9a2c5ee93d6b072cece0e7be813803b471a8a446e14da9
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.418850358Z" level=info msg="Created container 7b299bd284d76d1fbbc244e37a15ab48827810386560aa62783c2b8fd922a614: kube-system/kube-proxy-qsv24/kube-proxy" id=1bb7d60f-ceb3-4ce3-952a-b4f9c19ca700 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.419780943Z" level=info msg="Starting container: 7b299bd284d76d1fbbc244e37a15ab48827810386560aa62783c2b8fd922a614" id=2a06993b-b786-4923-ad25-1a30b055c87a name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:06:07 newest-cni-009374 crio[613]: time="2025-10-02T08:06:07.427805904Z" level=info msg="Started container" PID=1064 containerID=7b299bd284d76d1fbbc244e37a15ab48827810386560aa62783c2b8fd922a614 description=kube-system/kube-proxy-qsv24/kube-proxy id=2a06993b-b786-4923-ad25-1a30b055c87a name=/runtime.v1.RuntimeService/StartContainer sandboxID=258889e65e54a8fc2dd9d50f5a8cb30580bba70e50f774f7873a327ed75701e8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	906400afe3b1c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   9 seconds ago       Running             kindnet-cni               1                   00161c3c1e91d       kindnet-f45p7                               kube-system
	7b299bd284d76       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   9 seconds ago       Running             kube-proxy                1                   258889e65e54a       kube-proxy-qsv24                            kube-system
	d852a5ee6ab3f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   1                   0ba77bf08c65b       kube-controller-manager-newest-cni-009374   kube-system
	5144281ff58cd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   4445a756796ba       etcd-newest-cni-009374                      kube-system
	904406b7e4779       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   a0dcacbba4473       kube-scheduler-newest-cni-009374            kube-system
	ca67d62c7642e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   2a1caaee0d3d6       kube-apiserver-newest-cni-009374            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-009374
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-009374
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=newest-cni-009374
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T08_05_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 08:05:40 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-009374
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:06:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:06:07 +0000   Thu, 02 Oct 2025 08:05:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:06:07 +0000   Thu, 02 Oct 2025 08:05:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:06:07 +0000   Thu, 02 Oct 2025 08:05:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 02 Oct 2025 08:06:07 +0000   Thu, 02 Oct 2025 08:05:36 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-009374
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 584d1bfd6a1649509061241d2485e843
	  System UUID:                ee2e55db-e4b5-4d38-b86d-d81369f0c72d
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-009374                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-f45p7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-009374             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-009374    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-qsv24                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-009374             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 8s                 kube-proxy       
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-009374 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-009374 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-009374 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-009374 event: Registered Node newest-cni-009374 in Controller
	  Normal   Starting                 15s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x8 over 15s)  kubelet          Node newest-cni-009374 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 15s)  kubelet          Node newest-cni-009374 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 15s)  kubelet          Node newest-cni-009374 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-009374 event: Registered Node newest-cni-009374 in Controller
	
	
	==> dmesg <==
	[ +19.423688] overlayfs: idmapped layers are currently not supported
	[ +10.802067] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:00] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:03] overlayfs: idmapped layers are currently not supported
	[ +38.953360] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:05] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5144281ff58cdfc1fa699a355d4776ad326aedf99dd6ba8aca036d3fe972c0a5] <==
	{"level":"warn","ts":"2025-10-02T08:06:05.296084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.323396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.349335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.361388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.379997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.392844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.423620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.441257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.447540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.470225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.502528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.502793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.517181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.536759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.555906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.570277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.582785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.606610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.640896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.662886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.682567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.709053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.719948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.742655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:05.801738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59634","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:06:16 up  2:48,  0 user,  load average: 3.15, 3.04, 2.31
	Linux newest-cni-009374 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [906400afe3b1c23ce826dc1b1317eb26ec68c5106203bd2769ccd7c84427dde8] <==
	I1002 08:06:07.600594       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 08:06:07.600809       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 08:06:07.600915       1 main.go:148] setting mtu 1500 for CNI 
	I1002 08:06:07.600927       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 08:06:07.600936       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T08:06:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 08:06:07.823706       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 08:06:07.823728       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 08:06:07.823737       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 08:06:07.824032       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [ca67d62c7642e459b742ee5666f23f57014ee5e56ecb1687a6ab0d9bf8ccc00b] <==
	I1002 08:06:06.988794       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 08:06:07.007066       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:06:07.008020       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 08:06:07.009300       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1002 08:06:07.010180       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 08:06:07.010196       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 08:06:07.010952       1 aggregator.go:171] initial CRD sync complete...
	I1002 08:06:07.010971       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 08:06:07.010981       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 08:06:07.010988       1 cache.go:39] Caches are synced for autoregister controller
	I1002 08:06:07.024507       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 08:06:07.052449       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 08:06:07.087207       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1002 08:06:07.151586       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 08:06:07.500715       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:06:08.158100       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 08:06:08.368416       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 08:06:08.420867       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:06:08.440421       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:06:08.646260       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.98.185"}
	I1002 08:06:08.690955       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.80.80"}
	I1002 08:06:11.559904       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 08:06:11.610498       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 08:06:11.716397       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 08:06:11.761472       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [d852a5ee6ab3f080654bd38770cf38424501162ccfb4ca29e7c0cb0043b44cc2] <==
	I1002 08:06:11.155933       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 08:06:11.155978       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 08:06:11.157770       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 08:06:11.156409       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 08:06:11.157857       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 08:06:11.164393       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 08:06:11.156075       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 08:06:11.164436       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 08:06:11.164445       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 08:06:11.168978       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 08:06:11.175645       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 08:06:11.177807       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 08:06:11.187294       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:06:11.187417       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 08:06:11.192776       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 08:06:11.192897       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 08:06:11.192952       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 08:06:11.192984       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 08:06:11.193012       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 08:06:11.204964       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 08:06:11.205096       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 08:06:11.205204       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 08:06:11.205315       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-009374"
	I1002 08:06:11.206188       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 08:06:11.211283       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	
	
	==> kube-proxy [7b299bd284d76d1fbbc244e37a15ab48827810386560aa62783c2b8fd922a614] <==
	I1002 08:06:07.634753       1 server_linux.go:53] "Using iptables proxy"
	I1002 08:06:08.113355       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 08:06:08.222696       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 08:06:08.237908       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 08:06:08.238005       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 08:06:08.396061       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 08:06:08.396262       1 server_linux.go:132] "Using iptables Proxier"
	I1002 08:06:08.426645       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 08:06:08.427117       1 server.go:527] "Version info" version="v1.34.1"
	I1002 08:06:08.455252       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:06:08.519934       1 config.go:200] "Starting service config controller"
	I1002 08:06:08.519956       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 08:06:08.519981       1 config.go:106] "Starting endpoint slice config controller"
	I1002 08:06:08.519986       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 08:06:08.519998       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 08:06:08.520002       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 08:06:08.537607       1 config.go:309] "Starting node config controller"
	I1002 08:06:08.537625       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 08:06:08.537633       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 08:06:08.622879       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 08:06:08.622923       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 08:06:08.623011       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [904406b7e4779f3c8b32fac799a2d1a02b6113419125403d28efe5b8c0330869] <==
	I1002 08:06:05.449503       1 serving.go:386] Generated self-signed cert in-memory
	I1002 08:06:07.253006       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 08:06:07.257921       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:06:07.308024       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 08:06:07.308100       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 08:06:07.308123       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 08:06:07.308159       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 08:06:07.310373       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:06:07.310387       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:06:07.310405       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:06:07.310412       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:06:07.410760       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:06:07.410803       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:06:07.431266       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 02 08:06:06 newest-cni-009374 kubelet[727]: I1002 08:06:06.740059     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-009374"
	Oct 02 08:06:06 newest-cni-009374 kubelet[727]: I1002 08:06:06.813676     727 apiserver.go:52] "Watching apiserver"
	Oct 02 08:06:06 newest-cni-009374 kubelet[727]: I1002 08:06:06.946812     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-009374"
	Oct 02 08:06:06 newest-cni-009374 kubelet[727]: I1002 08:06:06.961858     727 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.057247     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9cf92b3-8ccb-4487-b783-29df2834d679-xtables-lock\") pod \"kindnet-f45p7\" (UID: \"c9cf92b3-8ccb-4487-b783-29df2834d679\") " pod="kube-system/kindnet-f45p7"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.057303     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db609c90-476d-450d-a43d-0600b893f712-xtables-lock\") pod \"kube-proxy-qsv24\" (UID: \"db609c90-476d-450d-a43d-0600b893f712\") " pod="kube-system/kube-proxy-qsv24"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.057343     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9cf92b3-8ccb-4487-b783-29df2834d679-lib-modules\") pod \"kindnet-f45p7\" (UID: \"c9cf92b3-8ccb-4487-b783-29df2834d679\") " pod="kube-system/kindnet-f45p7"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.057992     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db609c90-476d-450d-a43d-0600b893f712-lib-modules\") pod \"kube-proxy-qsv24\" (UID: \"db609c90-476d-450d-a43d-0600b893f712\") " pod="kube-system/kube-proxy-qsv24"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.058055     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c9cf92b3-8ccb-4487-b783-29df2834d679-cni-cfg\") pod \"kindnet-f45p7\" (UID: \"c9cf92b3-8ccb-4487-b783-29df2834d679\") " pod="kube-system/kindnet-f45p7"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.066069     727 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.066176     727 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.066205     727 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.067072     727 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: E1002 08:06:07.103486     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-009374\" already exists" pod="kube-system/etcd-newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: E1002 08:06:07.103781     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-009374\" already exists" pod="kube-system/etcd-newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.103799     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.141313     727 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: E1002 08:06:07.200064     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-009374\" already exists" pod="kube-system/kube-apiserver-newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.200100     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: E1002 08:06:07.232617     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-009374\" already exists" pod="kube-system/kube-controller-manager-newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: I1002 08:06:07.232656     727 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-009374"
	Oct 02 08:06:07 newest-cni-009374 kubelet[727]: E1002 08:06:07.248615     727 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-009374\" already exists" pod="kube-system/kube-scheduler-newest-cni-009374"
	Oct 02 08:06:10 newest-cni-009374 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 08:06:10 newest-cni-009374 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 08:06:10 newest-cni-009374 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
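The repeated "[+]"/"[-]" blocks near the top of this log are the apiserver's per-check healthz output: the wait loop polls https://192.168.85.2:8443/healthz, a 500 response carries the individual check results (here [-]poststarthook/rbac/bootstrap-roles was the last failing check), and the loop finishes once a plain 200 "ok" comes back at 08:06:09. A minimal Go sketch of that polling pattern follows; it is illustrative only: minikube's real checker authenticates with the cluster's client certificates instead of skipping TLS verification, and the address is simply copied from the log above.

// Sketch: poll the apiserver /healthz endpoint until it returns 200.
// A failing (500) response body lists every check, prefixed with [+] or [-].
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Assumption for the sketch only; do not skip verification in real tooling.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	url := "https://192.168.85.2:8443/healthz" // address taken from the log above
	for attempt := 0; attempt < 30; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("healthz ok")
			return
		}
		fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for healthz")
}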
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-009374 -n newest-cni-009374
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-009374 -n newest-cni-009374: exit status 2 (455.33518ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-009374 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-p2j8l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bq7rk kubernetes-dashboard-855c9754f9-sp7t2
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-009374 describe pod coredns-66bc5c9577-p2j8l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bq7rk kubernetes-dashboard-855c9754f9-sp7t2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-009374 describe pod coredns-66bc5c9577-p2j8l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bq7rk kubernetes-dashboard-855c9754f9-sp7t2: exit status 1 (136.150223ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-p2j8l" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-bq7rk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-sp7t2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-009374 describe pod coredns-66bc5c9577-p2j8l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-bq7rk kubernetes-dashboard-855c9754f9-sp7t2: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.69s)
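For reference, the post-mortem above first asks kubectl for every pod whose phase is not Running and then tries to describe those pods by name; the NotFound errors are expected, since "kubectl describe pod" without a -n flag only searches the default namespace while coredns, storage-provisioner and the dashboard pods live in kube-system and kubernetes-dashboard. A rough Go sketch of the same two-step check, with the context name copied from this run and everything else purely illustrative:

// Sketch of the non-running-pod post-mortem: list, then describe.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "newest-cni-009374" // context name from this run
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	pods := strings.Fields(string(out))
	if len(pods) == 0 {
		fmt.Println("all pods Running")
		return
	}
	fmt.Println("non-running pods:", pods)
	// Describing by bare name hits the default namespace only, which is why
	// the report shows NotFound for pods that actually exist in kube-system.
	args := append([]string{"--context", ctx, "describe", "pod"}, pods...)
	desc, _ := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Println(string(desc))
}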

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-417078 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-417078 --alsologtostderr -v=1: exit status 80 (1.861210783s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-417078 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 08:07:13.652081  517453 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:07:13.652292  517453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:07:13.652325  517453 out.go:374] Setting ErrFile to fd 2...
	I1002 08:07:13.652347  517453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:07:13.652923  517453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:07:13.653317  517453 out.go:368] Setting JSON to false
	I1002 08:07:13.653393  517453 mustload.go:65] Loading cluster: default-k8s-diff-port-417078
	I1002 08:07:13.654186  517453 config.go:182] Loaded profile config "default-k8s-diff-port-417078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:07:13.655250  517453 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-417078 --format={{.State.Status}}
	I1002 08:07:13.673389  517453 host.go:66] Checking if "default-k8s-diff-port-417078" exists ...
	I1002 08:07:13.673720  517453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:07:13.732115  517453 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 08:07:13.721630911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:07:13.732809  517453 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-417078 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 08:07:13.737055  517453 out.go:179] * Pausing node default-k8s-diff-port-417078 ... 
	I1002 08:07:13.740871  517453 host.go:66] Checking if "default-k8s-diff-port-417078" exists ...
	I1002 08:07:13.741241  517453 ssh_runner.go:195] Run: systemctl --version
	I1002 08:07:13.741298  517453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-417078
	I1002 08:07:13.758491  517453 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/default-k8s-diff-port-417078/id_rsa Username:docker}
	I1002 08:07:13.862026  517453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:07:13.879989  517453 pause.go:51] kubelet running: true
	I1002 08:07:13.880090  517453 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:07:14.146646  517453 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:07:14.146734  517453 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:07:14.217871  517453 cri.go:89] found id: "60656c47cfe1b3b0b174507dbed097964a91a1226d4508163960b2e21510a0fe"
	I1002 08:07:14.217892  517453 cri.go:89] found id: "7857d6a2c27eedcc3e1e3425fc86feebd1ed00455b0b25e76849e78058d175a8"
	I1002 08:07:14.217899  517453 cri.go:89] found id: "1c2b537ef32d116dc218025592702865324dd99cf3c1c074eda8168c73deb8fb"
	I1002 08:07:14.217902  517453 cri.go:89] found id: "3e3390ef7a71ec7064e94b1c428bc44ed214876f28e31ea3bc944aab82217db4"
	I1002 08:07:14.217905  517453 cri.go:89] found id: "5e20db31d550901a0af4d1d01bbd43e4c4e376a5f51d16b6befe7b4fd80f53fc"
	I1002 08:07:14.217913  517453 cri.go:89] found id: "58e9ec4d181400c19075bad03bd7c590fa61e2f6e890fe6423d6ab1e2a40928d"
	I1002 08:07:14.217916  517453 cri.go:89] found id: "3fe23dab4fa0ba272028a64c70d3af8948cb437fb69796d50bf0133f85d526af"
	I1002 08:07:14.217919  517453 cri.go:89] found id: "51204ad2326f23863feeb5f81eec088fffc09135e7fccfb05c306b274a31f295"
	I1002 08:07:14.217922  517453 cri.go:89] found id: "3a01e925d0339bb867ed641377431c1c576bffc854679e92eb2e19a036a34feb"
	I1002 08:07:14.217929  517453 cri.go:89] found id: "89a092255a6551e8d029774a61b80e3deae1f18d316632be4c9595a6fce3e283"
	I1002 08:07:14.217933  517453 cri.go:89] found id: "3424e6b891d1d444a5fd9113b3934912df22aa8b2559334195df2b60a5decea2"
	I1002 08:07:14.217936  517453 cri.go:89] found id: ""
	I1002 08:07:14.217987  517453 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:07:14.232366  517453 retry.go:31] will retry after 247.930743ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:07:14Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:07:14.480844  517453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:07:14.494788  517453 pause.go:51] kubelet running: false
	I1002 08:07:14.494876  517453 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:07:14.687155  517453 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:07:14.687277  517453 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:07:14.767717  517453 cri.go:89] found id: "60656c47cfe1b3b0b174507dbed097964a91a1226d4508163960b2e21510a0fe"
	I1002 08:07:14.767758  517453 cri.go:89] found id: "7857d6a2c27eedcc3e1e3425fc86feebd1ed00455b0b25e76849e78058d175a8"
	I1002 08:07:14.767764  517453 cri.go:89] found id: "1c2b537ef32d116dc218025592702865324dd99cf3c1c074eda8168c73deb8fb"
	I1002 08:07:14.767788  517453 cri.go:89] found id: "3e3390ef7a71ec7064e94b1c428bc44ed214876f28e31ea3bc944aab82217db4"
	I1002 08:07:14.767796  517453 cri.go:89] found id: "5e20db31d550901a0af4d1d01bbd43e4c4e376a5f51d16b6befe7b4fd80f53fc"
	I1002 08:07:14.767801  517453 cri.go:89] found id: "58e9ec4d181400c19075bad03bd7c590fa61e2f6e890fe6423d6ab1e2a40928d"
	I1002 08:07:14.767804  517453 cri.go:89] found id: "3fe23dab4fa0ba272028a64c70d3af8948cb437fb69796d50bf0133f85d526af"
	I1002 08:07:14.767808  517453 cri.go:89] found id: "51204ad2326f23863feeb5f81eec088fffc09135e7fccfb05c306b274a31f295"
	I1002 08:07:14.767812  517453 cri.go:89] found id: "3a01e925d0339bb867ed641377431c1c576bffc854679e92eb2e19a036a34feb"
	I1002 08:07:14.767828  517453 cri.go:89] found id: "89a092255a6551e8d029774a61b80e3deae1f18d316632be4c9595a6fce3e283"
	I1002 08:07:14.767831  517453 cri.go:89] found id: "3424e6b891d1d444a5fd9113b3934912df22aa8b2559334195df2b60a5decea2"
	I1002 08:07:14.767834  517453 cri.go:89] found id: ""
	I1002 08:07:14.767897  517453 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:07:14.779883  517453 retry.go:31] will retry after 366.591643ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:07:14Z" level=error msg="open /run/runc: no such file or directory"
	I1002 08:07:15.147255  517453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:07:15.161872  517453 pause.go:51] kubelet running: false
	I1002 08:07:15.161958  517453 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 08:07:15.337923  517453 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 08:07:15.338032  517453 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 08:07:15.421005  517453 cri.go:89] found id: "60656c47cfe1b3b0b174507dbed097964a91a1226d4508163960b2e21510a0fe"
	I1002 08:07:15.421028  517453 cri.go:89] found id: "7857d6a2c27eedcc3e1e3425fc86feebd1ed00455b0b25e76849e78058d175a8"
	I1002 08:07:15.421033  517453 cri.go:89] found id: "1c2b537ef32d116dc218025592702865324dd99cf3c1c074eda8168c73deb8fb"
	I1002 08:07:15.421037  517453 cri.go:89] found id: "3e3390ef7a71ec7064e94b1c428bc44ed214876f28e31ea3bc944aab82217db4"
	I1002 08:07:15.421041  517453 cri.go:89] found id: "5e20db31d550901a0af4d1d01bbd43e4c4e376a5f51d16b6befe7b4fd80f53fc"
	I1002 08:07:15.421044  517453 cri.go:89] found id: "58e9ec4d181400c19075bad03bd7c590fa61e2f6e890fe6423d6ab1e2a40928d"
	I1002 08:07:15.421047  517453 cri.go:89] found id: "3fe23dab4fa0ba272028a64c70d3af8948cb437fb69796d50bf0133f85d526af"
	I1002 08:07:15.421050  517453 cri.go:89] found id: "51204ad2326f23863feeb5f81eec088fffc09135e7fccfb05c306b274a31f295"
	I1002 08:07:15.421053  517453 cri.go:89] found id: "3a01e925d0339bb867ed641377431c1c576bffc854679e92eb2e19a036a34feb"
	I1002 08:07:15.421061  517453 cri.go:89] found id: "89a092255a6551e8d029774a61b80e3deae1f18d316632be4c9595a6fce3e283"
	I1002 08:07:15.421065  517453 cri.go:89] found id: "3424e6b891d1d444a5fd9113b3934912df22aa8b2559334195df2b60a5decea2"
	I1002 08:07:15.421069  517453 cri.go:89] found id: ""
	I1002 08:07:15.421120  517453 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 08:07:15.436213  517453 out.go:203] 
	W1002 08:07:15.439454  517453 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:07:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T08:07:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 08:07:15.439473  517453 out.go:285] * 
	* 
	W1002 08:07:15.445164  517453 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 08:07:15.448381  517453 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-417078 --alsologtostderr -v=1 failed: exit status 80
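The stderr above shows the fixed sequence `minikube pause` runs on the node: disable the kubelet, enumerate kube-system, kubernetes-dashboard and istio-operator containers with crictl, then call `sudo runc list -f json`. On this crio node /run/runc does not exist, so every attempt (including the retries after ~248ms and ~367ms) fails and the command exits with GUEST_PAUSE / exit status 80. The Go sketch below only reproduces that retry-then-fail loop as it appears in the log; the helper name and fallback behaviour are illustrative, not minikube's actual pause implementation.

// Sketch only: the retry-then-fail loop visible in the pause log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunningRunc runs the same command the log issues via ssh_runner.go:195.
func listRunningRunc() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	out, err := listRunningRunc()
	for _, d := range []time.Duration{248 * time.Millisecond, 367 * time.Millisecond} {
		if err == nil {
			break
		}
		time.Sleep(d) // retry.go above waits ~248ms, then ~367ms
		out, err = listRunningRunc()
	}
	if err != nil {
		// With /run/runc missing, all attempts fail and pause aborts
		// with GUEST_PAUSE, exactly as the test observed.
		fmt.Println("pause failed:", err)
		return
	}
	fmt.Println(string(out))
}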
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-417078
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-417078:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba",
	        "Created": "2025-10-02T08:04:28.399453084Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 511456,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T08:06:08.82098825Z",
	            "FinishedAt": "2025-10-02T08:06:07.611013554Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/hosts",
	        "LogPath": "/var/lib/docker/containers/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba-json.log",
	        "Name": "/default-k8s-diff-port-417078",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-417078:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-417078",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba",
	                "LowerDir": "/var/lib/docker/overlay2/0ca735e4bdb118c286be480b4f12dd3f904411128e2680db9b5f872634cd93c0-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0ca735e4bdb118c286be480b4f12dd3f904411128e2680db9b5f872634cd93c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0ca735e4bdb118c286be480b4f12dd3f904411128e2680db9b5f872634cd93c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0ca735e4bdb118c286be480b4f12dd3f904411128e2680db9b5f872634cd93c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-417078",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-417078/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-417078",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-417078",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-417078",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b505d990c2bfb9da36ccae88f4562aca24d2baeb18a5ce7d7e0e80cfe0597021",
	            "SandboxKey": "/var/run/docker/netns/b505d990c2bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-417078": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:9a:f4:17:64:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d1780ea11813add7386f7a8e327ace3f3a59d3c8ad3cf5599ed166ee793fe5a6",
	                    "EndpointID": "c1f2b8b72d37e2ae07cb2ee1b6a1ec68f4ac0c82fa34cc2d8f1dcaa4780ab38d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-417078",
	                        "9b8a295e3342"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
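Two fields in the docker inspect output above are what the post-mortem and the pause helper actually consume: State.Running is still true (which is why the later status check prints "Running"), and NetworkSettings.Ports maps 22/tcp to 127.0.0.1:33443, the SSH endpoint the pause command dialed earlier (sshutil.go:53). The sketch below simply re-runs the Go-template lookup that already appears in the log at cli_runner.go:164; nothing beyond that command is assumed.

// Re-run the port lookup from the log: host port published for 22/tcp.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	format := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
	out, err := exec.Command("docker", "container", "inspect", "-f", format,
		"default-k8s-diff-port-417078").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Printf("ssh port: %s\n", out) // the inspect output above shows 33443
}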
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417078 -n default-k8s-diff-port-417078
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417078 -n default-k8s-diff-port-417078: exit status 2 (360.826773ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-417078 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-417078 logs -n 25: (1.30950162s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-604182 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p disable-driver-mounts-466206                                                                                                                                                                                                               │ disable-driver-mounts-466206 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ start   │ -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:05 UTC │
	│ image   │ embed-certs-171347 image list --format=json                                                                                                                                                                                                   │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p embed-certs-171347 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ delete  │ -p embed-certs-171347                                                                                                                                                                                                                         │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ delete  │ -p embed-certs-171347                                                                                                                                                                                                                         │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ start   │ -p newest-cni-009374 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ addons  │ enable metrics-server -p newest-cni-009374 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-417078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │                     │
	│ stop    │ -p newest-cni-009374 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-009374 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ start   │ -p newest-cni-009374 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:06 UTC │
	│ stop    │ -p default-k8s-diff-port-417078 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-417078 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │ 02 Oct 25 08:06 UTC │
	│ start   │ -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │ 02 Oct 25 08:07 UTC │
	│ image   │ newest-cni-009374 image list --format=json                                                                                                                                                                                                    │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │ 02 Oct 25 08:06 UTC │
	│ pause   │ -p newest-cni-009374 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │                     │
	│ delete  │ -p newest-cni-009374                                                                                                                                                                                                                          │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │ 02 Oct 25 08:06 UTC │
	│ delete  │ -p newest-cni-009374                                                                                                                                                                                                                          │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │ 02 Oct 25 08:06 UTC │
	│ start   │ -p auto-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-810803                  │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │                     │
	│ image   │ default-k8s-diff-port-417078 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:07 UTC │ 02 Oct 25 08:07 UTC │
	│ pause   │ -p default-k8s-diff-port-417078 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:06:20
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:06:20.837857  514309 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:06:20.838096  514309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:06:20.838126  514309 out.go:374] Setting ErrFile to fd 2...
	I1002 08:06:20.838145  514309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:06:20.838442  514309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:06:20.838914  514309 out.go:368] Setting JSON to false
	I1002 08:06:20.839956  514309 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10132,"bootTime":1759382249,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 08:06:20.840053  514309 start.go:140] virtualization:  
	I1002 08:06:20.844126  514309 out.go:179] * [auto-810803] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:06:20.848552  514309 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:06:20.848626  514309 notify.go:220] Checking for updates...
	I1002 08:06:20.855019  514309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:06:20.858166  514309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:06:20.861105  514309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 08:06:20.864039  514309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:06:20.866931  514309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:06:20.870294  514309 config.go:182] Loaded profile config "default-k8s-diff-port-417078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:06:20.870393  514309 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:06:20.915774  514309 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:06:20.915896  514309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:06:21.033549  514309 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 08:06:21.02286989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:06:21.033656  514309 docker.go:318] overlay module found
	I1002 08:06:21.036826  514309 out.go:179] * Using the docker driver based on user configuration
	I1002 08:06:21.039672  514309 start.go:304] selected driver: docker
	I1002 08:06:21.039692  514309 start.go:924] validating driver "docker" against <nil>
	I1002 08:06:21.039706  514309 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:06:21.040440  514309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:06:21.139480  514309 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 08:06:21.129471336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:06:21.139633  514309 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 08:06:21.139862  514309 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:06:21.142828  514309 out.go:179] * Using Docker driver with root privileges
	I1002 08:06:21.145632  514309 cni.go:84] Creating CNI manager for ""
	I1002 08:06:21.145711  514309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:06:21.145721  514309 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 08:06:21.145806  514309 start.go:348] cluster config:
	{Name:auto-810803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
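	The CNI decision in the start log is mechanical: with the docker driver, the crio runtime, and no --cni flag, cni.go:143 recommends kindnet and start_flags.go:336 records NetworkPlugin=cni in the cluster config dumped above. A hypothetical sketch of that rule (the fallback branch is a placeholder, not taken from this log):

// Hypothetical sketch of the recommendation at cni.go:143 above:
// docker driver + crio runtime and no explicit choice -> kindnet.
package main

import "fmt"

func recommendCNI(driver, runtime, requested string) string {
	if requested != "" {
		return requested // an explicit --cni flag wins
	}
	if driver == "docker" && runtime == "crio" {
		return "kindnet"
	}
	return "auto" // placeholder default, not derived from this log
}

func main() {
	fmt.Println(recommendCNI("docker", "crio", "")) // kindnet
}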
	I1002 08:06:21.148954  514309 out.go:179] * Starting "auto-810803" primary control-plane node in "auto-810803" cluster
	I1002 08:06:21.151810  514309 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 08:06:21.154755  514309 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 08:06:21.157637  514309 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:06:21.157707  514309 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 08:06:21.157716  514309 cache.go:58] Caching tarball of preloaded images
	I1002 08:06:21.157806  514309 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 08:06:21.157814  514309 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 08:06:21.157921  514309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/config.json ...
	I1002 08:06:21.157938  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/config.json: {Name:mka66e6efdbcad76fc2b29a7977775d2fbacd1b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:21.158116  514309 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 08:06:21.189144  514309 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 08:06:21.189164  514309 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 08:06:21.189184  514309 cache.go:232] Successfully downloaded all kic artifacts
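	Before provisioning, the start path checks whether the pinned kicbase image is already loaded in the local docker daemon and skips the pull when it is (image.go:81 and image.go:100 above). A minimal sketch of that kind of presence check, using a plain `docker image inspect` call as a stand-in for minikube's internal image package:

// Sketch: skip pulling the kic base image when the local daemon already has it.
// "docker image inspect" stands in for minikube's image.go check.
package main

import (
	"fmt"
	"os/exec"
)

func haveImage(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643"
	if haveImage(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
		return
	}
	fmt.Println("pulling", ref)
}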
	I1002 08:06:21.189207  514309 start.go:360] acquireMachinesLock for auto-810803: {Name:mk08df67a7e417b0dfa95a73d23b98c7c3ff0065 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:06:21.189308  514309 start.go:364] duration metric: took 85.03µs to acquireMachinesLock for "auto-810803"
	I1002 08:06:21.189333  514309 start.go:93] Provisioning new machine with config: &{Name:auto-810803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810803 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:06:21.189398  514309 start.go:125] createHost starting for "" (driver="docker")
	I1002 08:06:18.474413  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 08:06:18.474436  511270 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 08:06:18.492131  511270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:06:18.507973  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 08:06:18.507993  511270 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 08:06:18.612766  511270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:06:18.643325  511270 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-417078" to be "Ready" ...
	I1002 08:06:18.667715  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 08:06:18.667780  511270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 08:06:18.676235  511270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:06:18.727168  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 08:06:18.727244  511270 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 08:06:18.899989  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 08:06:18.900015  511270 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 08:06:19.044775  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 08:06:19.044801  511270 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 08:06:19.080590  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 08:06:19.080617  511270 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 08:06:19.103731  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 08:06:19.103762  511270 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 08:06:19.124689  511270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
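	The interleaved 511270 lines show the dashboard addon being installed on default-k8s-diff-port-417078: each manifest is copied to /etc/kubernetes/addons over scp, then everything is applied in one invocation of the version-matched kubectl with the cluster's own kubeconfig (the exact command is in the 08:06:19.124689 line above). The sketch below only rebuilds that command from the manifest list; the paths and kubectl location are copied from the log.

// Rebuild the single "kubectl apply" from the addon-install lines above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	manifests := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml",
		"dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
		"dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml",
		"dashboard-secret.yaml", "dashboard-svc.yaml",
	}
	cmd := []string{
		"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
	}
	for _, m := range manifests {
		cmd = append(cmd, "-f", "/etc/kubernetes/addons/"+m)
	}
	fmt.Println(strings.Join(cmd, " "))
}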
	I1002 08:06:21.192947  514309 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 08:06:21.193184  514309 start.go:159] libmachine.API.Create for "auto-810803" (driver="docker")
	I1002 08:06:21.193239  514309 client.go:168] LocalClient.Create starting
	I1002 08:06:21.193308  514309 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem
	I1002 08:06:21.193341  514309 main.go:141] libmachine: Decoding PEM data...
	I1002 08:06:21.193357  514309 main.go:141] libmachine: Parsing certificate...
	I1002 08:06:21.193410  514309 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem
	I1002 08:06:21.193429  514309 main.go:141] libmachine: Decoding PEM data...
	I1002 08:06:21.193438  514309 main.go:141] libmachine: Parsing certificate...
	I1002 08:06:21.193792  514309 cli_runner.go:164] Run: docker network inspect auto-810803 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 08:06:21.225942  514309 cli_runner.go:211] docker network inspect auto-810803 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 08:06:21.226039  514309 network_create.go:284] running [docker network inspect auto-810803] to gather additional debugging logs...
	I1002 08:06:21.226056  514309 cli_runner.go:164] Run: docker network inspect auto-810803
	W1002 08:06:21.276914  514309 cli_runner.go:211] docker network inspect auto-810803 returned with exit code 1
	I1002 08:06:21.276941  514309 network_create.go:287] error running [docker network inspect auto-810803]: docker network inspect auto-810803: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-810803 not found
	I1002 08:06:21.276953  514309 network_create.go:289] output of [docker network inspect auto-810803]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-810803 not found
	
	** /stderr **
	I1002 08:06:21.277054  514309 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:06:21.302297  514309 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-87a294cab4b5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:50:ad:a1:2a:88} reservation:<nil>}
	I1002 08:06:21.302674  514309 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-560172b9232e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:9f:ec:fb:3f:87} reservation:<nil>}
	I1002 08:06:21.302819  514309 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2eae6334e56d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:6a:a0:79:3a:d9} reservation:<nil>}
	I1002 08:06:21.303120  514309 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d1780ea11813 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:84:d7:de:73:b2} reservation:<nil>}
	I1002 08:06:21.303542  514309 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a3ec0}
	I1002 08:06:21.303559  514309 network_create.go:124] attempt to create docker network auto-810803 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 08:06:21.303621  514309 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-810803 auto-810803
	I1002 08:06:21.386427  514309 network_create.go:108] docker network auto-810803 192.168.85.0/24 created
	I1002 08:06:21.386475  514309 kic.go:121] calculated static IP "192.168.85.2" for the "auto-810803" container
	I1002 08:06:21.386551  514309 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 08:06:21.417854  514309 cli_runner.go:164] Run: docker volume create auto-810803 --label name.minikube.sigs.k8s.io=auto-810803 --label created_by.minikube.sigs.k8s.io=true
	I1002 08:06:21.448673  514309 oci.go:103] Successfully created a docker volume auto-810803
	I1002 08:06:21.448756  514309 cli_runner.go:164] Run: docker run --rm --name auto-810803-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-810803 --entrypoint /usr/bin/test -v auto-810803:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 08:06:22.193525  514309 oci.go:107] Successfully prepared a docker volume auto-810803
	I1002 08:06:22.193581  514309 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:06:22.193601  514309 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 08:06:22.193667  514309 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-810803:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
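The lines above are minikube provisioning the KIC "machine": it probes the existing bridge networks, picks the first free /24 (192.168.85.0/24 here), creates a labeled docker network, creates a named volume, and unpacks the cached image tarball into that volume. A rough hand-driven equivalent with the docker CLI, using the names and paths from this run (the kicbase image reference is abbreviated to a placeholder):

	# create an isolated bridge network with a fixed subnet, as network_create.go does
	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-810803
	# create the node's data volume and pre-populate it from the preload tarball
	docker volume create --label created_by.minikube.sigs.k8s.io=true auto-810803
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro" \
	  -v auto-810803:/extractDir <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir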
	I1002 08:06:24.237947  511270 node_ready.go:49] node "default-k8s-diff-port-417078" is "Ready"
	I1002 08:06:24.237974  511270 node_ready.go:38] duration metric: took 5.594615436s for node "default-k8s-diff-port-417078" to be "Ready" ...
	I1002 08:06:24.237991  511270 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:06:24.238068  511270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:06:26.708164  511270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.09536047s)
	I1002 08:06:26.708249  511270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.031945787s)
	I1002 08:06:27.142224  511270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.017491902s)
	I1002 08:06:27.142404  511270 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.904324053s)
	I1002 08:06:27.142425  511270 api_server.go:72] duration metric: took 9.157351893s to wait for apiserver process to appear ...
	I1002 08:06:27.142432  511270 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:06:27.142451  511270 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1002 08:06:27.145275  511270 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-417078 addons enable metrics-server
	
	I1002 08:06:27.148220  511270 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1002 08:06:27.151244  511270 addons.go:514] duration metric: took 9.165700869s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1002 08:06:27.160318  511270 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 08:06:27.160350  511270 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 08:06:27.642560  511270 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1002 08:06:27.671499  511270 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1002 08:06:27.676932  511270 api_server.go:141] control plane version: v1.34.1
	I1002 08:06:27.676992  511270 api_server.go:131] duration metric: took 534.537801ms to wait for apiserver health ...
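The healthz loop above can be reproduced by hand against the same endpoint; immediately after a restart the apiserver may answer 500 with "[-]poststarthook/rbac/bootstrap-roles failed" as logged, and the wait simply retries until the body is "ok" (host, port and the self-signed certificate are specific to this run, hence -k):

	curl -sk https://192.168.76.2:8444/healthz            # plain "ok" once healthy
	curl -sk "https://192.168.76.2:8444/healthz?verbose"  # per-check breakdown like the one logged above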
	I1002 08:06:27.677003  511270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:06:27.704088  511270 system_pods.go:59] 8 kube-system pods found
	I1002 08:06:27.704123  511270 system_pods.go:61] "coredns-66bc5c9577-cscrn" [f16e8634-2bad-477e-8a6a-125d5982309c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:06:27.704139  511270 system_pods.go:61] "etcd-default-k8s-diff-port-417078" [42031abb-d4f1-402f-ab56-84febc04510b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:06:27.704147  511270 system_pods.go:61] "kindnet-xvmxj" [8150ddc1-f400-422d-a0a6-3a42c58bec39] Running
	I1002 08:06:27.704154  511270 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-417078" [a873c14b-9486-43dc-ae23-14e8295d0848] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:06:27.704162  511270 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-417078" [da19df7e-eaba-494d-8b1b-34d66627a3ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:06:27.704176  511270 system_pods.go:61] "kube-proxy-g6hc4" [63b17498-7dca-45ba-81a8-4aa33302a8df] Running
	I1002 08:06:27.704184  511270 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-417078" [ddfd8f2d-83ca-4e3c-98b3-c3a4ea103ee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:06:27.704198  511270 system_pods.go:61] "storage-provisioner" [12bac59c-b28d-4401-8b03-fb5742196ee4] Running
	I1002 08:06:27.704209  511270 system_pods.go:74] duration metric: took 27.199291ms to wait for pod list to return data ...
	I1002 08:06:27.704218  511270 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:06:27.714516  511270 default_sa.go:45] found service account: "default"
	I1002 08:06:27.714555  511270 default_sa.go:55] duration metric: took 10.313408ms for default service account to be created ...
	I1002 08:06:27.714573  511270 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 08:06:27.723140  511270 system_pods.go:86] 8 kube-system pods found
	I1002 08:06:27.723181  511270 system_pods.go:89] "coredns-66bc5c9577-cscrn" [f16e8634-2bad-477e-8a6a-125d5982309c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:06:27.723193  511270 system_pods.go:89] "etcd-default-k8s-diff-port-417078" [42031abb-d4f1-402f-ab56-84febc04510b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:06:27.723199  511270 system_pods.go:89] "kindnet-xvmxj" [8150ddc1-f400-422d-a0a6-3a42c58bec39] Running
	I1002 08:06:27.723206  511270 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417078" [a873c14b-9486-43dc-ae23-14e8295d0848] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:06:27.723225  511270 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417078" [da19df7e-eaba-494d-8b1b-34d66627a3ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:06:27.723235  511270 system_pods.go:89] "kube-proxy-g6hc4" [63b17498-7dca-45ba-81a8-4aa33302a8df] Running
	I1002 08:06:27.723242  511270 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417078" [ddfd8f2d-83ca-4e3c-98b3-c3a4ea103ee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:06:27.723328  511270 system_pods.go:89] "storage-provisioner" [12bac59c-b28d-4401-8b03-fb5742196ee4] Running
	I1002 08:06:27.723378  511270 system_pods.go:126] duration metric: took 8.798145ms to wait for k8s-apps to be running ...
	I1002 08:06:27.723389  511270 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 08:06:27.723443  511270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:06:27.795508  511270 system_svc.go:56] duration metric: took 72.106997ms WaitForService to wait for kubelet
	I1002 08:06:27.795612  511270 kubeadm.go:586] duration metric: took 9.810535295s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:06:27.795641  511270 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:06:27.803936  511270 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:06:27.803968  511270 node_conditions.go:123] node cpu capacity is 2
	I1002 08:06:27.803982  511270 node_conditions.go:105] duration metric: took 8.333583ms to run NodePressure ...
	I1002 08:06:27.803995  511270 start.go:241] waiting for startup goroutines ...
	I1002 08:06:27.804003  511270 start.go:246] waiting for cluster config update ...
	I1002 08:06:27.804013  511270 start.go:255] writing updated cluster config ...
	I1002 08:06:27.804283  511270 ssh_runner.go:195] Run: rm -f paused
	I1002 08:06:27.815749  511270 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:06:27.831532  511270 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cscrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:06:26.911147  514309 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-810803:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.717439615s)
	I1002 08:06:26.911180  514309 kic.go:203] duration metric: took 4.717575739s to extract preloaded images to volume ...
	W1002 08:06:26.911328  514309 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 08:06:26.911474  514309 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 08:06:27.021628  514309 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-810803 --name auto-810803 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-810803 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-810803 --network auto-810803 --ip 192.168.85.2 --volume auto-810803:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 08:06:27.395057  514309 cli_runner.go:164] Run: docker container inspect auto-810803 --format={{.State.Running}}
	I1002 08:06:27.427966  514309 cli_runner.go:164] Run: docker container inspect auto-810803 --format={{.State.Status}}
	I1002 08:06:27.465068  514309 cli_runner.go:164] Run: docker exec auto-810803 stat /var/lib/dpkg/alternatives/iptables
	I1002 08:06:27.521709  514309 oci.go:144] the created container "auto-810803" has a running status.
	I1002 08:06:27.521748  514309 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa...
	I1002 08:06:28.811595  514309 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 08:06:28.831268  514309 cli_runner.go:164] Run: docker container inspect auto-810803 --format={{.State.Status}}
	I1002 08:06:28.848908  514309 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 08:06:28.848932  514309 kic_runner.go:114] Args: [docker exec --privileged auto-810803 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 08:06:28.903331  514309 cli_runner.go:164] Run: docker container inspect auto-810803 --format={{.State.Status}}
	I1002 08:06:28.927749  514309 machine.go:93] provisionDockerMachine start ...
	I1002 08:06:28.927858  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:28.962598  514309 main.go:141] libmachine: Using SSH client type: native
	I1002 08:06:28.962955  514309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1002 08:06:28.962972  514309 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 08:06:28.963533  514309 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47398->127.0.0.1:33448: read: connection reset by peer
	W1002 08:06:29.837135  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	W1002 08:06:31.839937  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	I1002 08:06:32.115586  514309 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-810803
	
	I1002 08:06:32.115650  514309 ubuntu.go:182] provisioning hostname "auto-810803"
	I1002 08:06:32.115747  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:32.139295  514309 main.go:141] libmachine: Using SSH client type: native
	I1002 08:06:32.139595  514309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1002 08:06:32.139612  514309 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-810803 && echo "auto-810803" | sudo tee /etc/hostname
	I1002 08:06:32.312049  514309 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-810803
	
	I1002 08:06:32.312141  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:32.336439  514309 main.go:141] libmachine: Using SSH client type: native
	I1002 08:06:32.336753  514309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1002 08:06:32.336778  514309 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-810803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-810803/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-810803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 08:06:32.475490  514309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 08:06:32.475523  514309 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 08:06:32.475546  514309 ubuntu.go:190] setting up certificates
	I1002 08:06:32.475558  514309 provision.go:84] configureAuth start
	I1002 08:06:32.475626  514309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-810803
	I1002 08:06:32.504378  514309 provision.go:143] copyHostCerts
	I1002 08:06:32.504447  514309 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 08:06:32.504456  514309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 08:06:32.504550  514309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 08:06:32.504639  514309 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 08:06:32.504645  514309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 08:06:32.504670  514309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 08:06:32.504740  514309 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 08:06:32.504745  514309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 08:06:32.504768  514309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 08:06:32.504829  514309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.auto-810803 san=[127.0.0.1 192.168.85.2 auto-810803 localhost minikube]
	I1002 08:06:33.233209  514309 provision.go:177] copyRemoteCerts
	I1002 08:06:33.233289  514309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 08:06:33.233341  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:33.268751  514309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa Username:docker}
	I1002 08:06:33.373424  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 08:06:33.398568  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1002 08:06:33.426812  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 08:06:33.453113  514309 provision.go:87] duration metric: took 977.531018ms to configureAuth
	I1002 08:06:33.453142  514309 ubuntu.go:206] setting minikube options for container-runtime
	I1002 08:06:33.453330  514309 config.go:182] Loaded profile config "auto-810803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:06:33.453442  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:33.472731  514309 main.go:141] libmachine: Using SSH client type: native
	I1002 08:06:33.473045  514309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1002 08:06:33.473067  514309 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 08:06:33.793019  514309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 08:06:33.793047  514309 machine.go:96] duration metric: took 4.865276876s to provisionDockerMachine
	I1002 08:06:33.793058  514309 client.go:171] duration metric: took 12.599813345s to LocalClient.Create
	I1002 08:06:33.793072  514309 start.go:167] duration metric: took 12.5998947s to libmachine.API.Create "auto-810803"
	I1002 08:06:33.793107  514309 start.go:293] postStartSetup for "auto-810803" (driver="docker")
	I1002 08:06:33.793125  514309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 08:06:33.793195  514309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 08:06:33.793263  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:33.828630  514309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa Username:docker}
	I1002 08:06:33.927845  514309 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 08:06:33.931939  514309 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 08:06:33.931970  514309 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 08:06:33.931981  514309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 08:06:33.932040  514309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 08:06:33.932131  514309 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 08:06:33.932241  514309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 08:06:33.942468  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:06:33.969719  514309 start.go:296] duration metric: took 176.590466ms for postStartSetup
	I1002 08:06:33.970123  514309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-810803
	I1002 08:06:33.992779  514309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/config.json ...
	I1002 08:06:33.993071  514309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 08:06:33.993119  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:34.016736  514309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa Username:docker}
	I1002 08:06:34.120605  514309 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 08:06:34.127915  514309 start.go:128] duration metric: took 12.938500079s to createHost
	I1002 08:06:34.127938  514309 start.go:83] releasing machines lock for "auto-810803", held for 12.938621376s
	I1002 08:06:34.128013  514309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-810803
	I1002 08:06:34.149613  514309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 08:06:34.150564  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:34.150648  514309 ssh_runner.go:195] Run: cat /version.json
	I1002 08:06:34.152420  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:34.187527  514309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa Username:docker}
	I1002 08:06:34.187611  514309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa Username:docker}
	I1002 08:06:34.399717  514309 ssh_runner.go:195] Run: systemctl --version
	I1002 08:06:34.407801  514309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 08:06:34.462776  514309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 08:06:34.467674  514309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 08:06:34.467796  514309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 08:06:34.503810  514309 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 08:06:34.503881  514309 start.go:495] detecting cgroup driver to use...
	I1002 08:06:34.503944  514309 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 08:06:34.504051  514309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 08:06:34.528746  514309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 08:06:34.543789  514309 docker.go:218] disabling cri-docker service (if available) ...
	I1002 08:06:34.543926  514309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 08:06:34.562346  514309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 08:06:34.583364  514309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 08:06:34.732913  514309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 08:06:34.896247  514309 docker.go:234] disabling docker service ...
	I1002 08:06:34.896357  514309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 08:06:34.922730  514309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 08:06:34.940921  514309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 08:06:35.109624  514309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 08:06:35.264485  514309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 08:06:35.280201  514309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 08:06:35.295582  514309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 08:06:35.295647  514309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:06:35.305316  514309 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 08:06:35.305387  514309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:06:35.314987  514309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:06:35.324058  514309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:06:35.333817  514309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 08:06:35.342653  514309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:06:35.352215  514309 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:06:35.366389  514309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:06:35.375763  514309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 08:06:35.384588  514309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 08:06:35.393379  514309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:06:35.539848  514309 ssh_runner.go:195] Run: sudo systemctl restart crio
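Before the restart above, minikube points crictl at the CRI-O socket and patches the 02-crio.conf drop-in so the pause image and cgroup driver match what kubeadm expects. Condensed to the essential edits (paths and values copied from this run):

	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio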
	I1002 08:06:35.923550  514309 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 08:06:35.923673  514309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 08:06:35.930484  514309 start.go:563] Will wait 60s for crictl version
	I1002 08:06:35.930651  514309 ssh_runner.go:195] Run: which crictl
	I1002 08:06:35.937848  514309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 08:06:35.978622  514309 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 08:06:35.978704  514309 ssh_runner.go:195] Run: crio --version
	I1002 08:06:36.012943  514309 ssh_runner.go:195] Run: crio --version
	I1002 08:06:36.057011  514309 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1002 08:06:34.341272  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	W1002 08:06:36.343895  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	W1002 08:06:38.345834  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	I1002 08:06:36.059964  514309 cli_runner.go:164] Run: docker network inspect auto-810803 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:06:36.081451  514309 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 08:06:36.085527  514309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:06:36.100905  514309 kubeadm.go:883] updating cluster {Name:auto-810803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 08:06:36.101007  514309 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:06:36.101070  514309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:06:36.145388  514309 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:06:36.145407  514309 crio.go:433] Images already preloaded, skipping extraction
	I1002 08:06:36.145460  514309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:06:36.173505  514309 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:06:36.173580  514309 cache_images.go:85] Images are preloaded, skipping loading
	I1002 08:06:36.173602  514309 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 08:06:36.173719  514309 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-810803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-810803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
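The [Unit]/[Service] override above is installed as a systemd drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below); on the node it can be checked and applied with standard systemd tooling, for example:

	systemctl cat kubelet                              # base unit plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload && sudo systemctl restart kubelet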
	I1002 08:06:36.173838  514309 ssh_runner.go:195] Run: crio config
	I1002 08:06:36.242887  514309 cni.go:84] Creating CNI manager for ""
	I1002 08:06:36.242952  514309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:06:36.242981  514309 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 08:06:36.243036  514309 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-810803 NodeName:auto-810803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 08:06:36.243213  514309 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-810803"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
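This generated kubeadm configuration is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and later fed to kubeadm when the cluster is bootstrapped. A config of this shape can be sanity-checked without modifying the node, for example with a dry run (illustrative; assumes the file has been saved locally as kubeadm.yaml):

	sudo kubeadm init --config kubeadm.yaml --dry-run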
	
	I1002 08:06:36.243321  514309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 08:06:36.252219  514309 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 08:06:36.252332  514309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 08:06:36.261559  514309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1002 08:06:36.276467  514309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 08:06:36.291197  514309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1002 08:06:36.307256  514309 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 08:06:36.311187  514309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:06:36.321616  514309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:06:36.480274  514309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:06:36.496384  514309 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803 for IP: 192.168.85.2
	I1002 08:06:36.496455  514309 certs.go:195] generating shared ca certs ...
	I1002 08:06:36.496486  514309 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:36.496663  514309 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 08:06:36.496737  514309 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 08:06:36.496777  514309 certs.go:257] generating profile certs ...
	I1002 08:06:36.496858  514309 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.key
	I1002 08:06:36.496895  514309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt with IP's: []
	I1002 08:06:37.232198  514309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt ...
	I1002 08:06:37.232230  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: {Name:mkea2e55c1e1ae8aecf9c1c8462a12f6c15e1737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:37.232427  514309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.key ...
	I1002 08:06:37.232445  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.key: {Name:mkfb4c7d19a8a0ace68a5273fd7f48046a8d5252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:37.232552  514309 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.key.64edb8c6
	I1002 08:06:37.232573  514309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.crt.64edb8c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 08:06:37.798694  514309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.crt.64edb8c6 ...
	I1002 08:06:37.798727  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.crt.64edb8c6: {Name:mkf538a64679a31792ccc2e75ed53d24bfa09749 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:37.798990  514309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.key.64edb8c6 ...
	I1002 08:06:37.799013  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.key.64edb8c6: {Name:mk12f0b15d1b4207ceb18248fc51f37b122ea6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:37.799136  514309 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.crt.64edb8c6 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.crt
	I1002 08:06:37.799229  514309 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.key.64edb8c6 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.key
	I1002 08:06:37.799294  514309 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.key
	I1002 08:06:37.799314  514309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.crt with IP's: []
	I1002 08:06:39.538739  514309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.crt ...
	I1002 08:06:39.538774  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.crt: {Name:mkdc94d432f1549d2e610bf2c7f17aabd64b281c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:39.538945  514309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.key ...
	I1002 08:06:39.538962  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.key: {Name:mk336416a341911b7f8763dd5dcda16c70e1a472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:39.539157  514309 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 08:06:39.539200  514309 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 08:06:39.539214  514309 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 08:06:39.539238  514309 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 08:06:39.539264  514309 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 08:06:39.539301  514309 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 08:06:39.539349  514309 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:06:39.540004  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 08:06:39.573029  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 08:06:39.600417  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 08:06:39.644639  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 08:06:39.679796  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 08:06:39.709222  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 08:06:39.727254  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 08:06:39.745552  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 08:06:39.763922  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 08:06:39.782532  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 08:06:39.801147  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 08:06:39.819691  514309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 08:06:39.832834  514309 ssh_runner.go:195] Run: openssl version
	I1002 08:06:39.839754  514309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 08:06:39.848684  514309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 08:06:39.853027  514309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 08:06:39.853096  514309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 08:06:39.909092  514309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 08:06:39.918557  514309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 08:06:39.930252  514309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:06:39.943389  514309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:06:39.943473  514309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:06:40.038797  514309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 08:06:40.049472  514309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 08:06:40.065496  514309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 08:06:40.075925  514309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 08:06:40.075999  514309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 08:06:40.122720  514309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
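	The symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: "openssl x509 -hash" prints the value that the TLS library expects as the file name of a trusted CA under /etc/ssl/certs. A minimal sketch of the same step done by hand, reusing the minikubeCA.pem path from the lines above (illustrative only, not part of the test log):

		# Compute the subject hash OpenSSL looks CAs up by, then link the cert
		# under that name in the system trust directory.
		hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # here: b5213941.0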
	I1002 08:06:40.132352  514309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 08:06:40.137484  514309 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 08:06:40.137538  514309 kubeadm.go:400] StartCluster: {Name:auto-810803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:06:40.137621  514309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 08:06:40.137685  514309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:06:40.170605  514309 cri.go:89] found id: ""
	I1002 08:06:40.170690  514309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 08:06:40.181705  514309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 08:06:40.190749  514309 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 08:06:40.190821  514309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 08:06:40.202460  514309 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 08:06:40.202483  514309 kubeadm.go:157] found existing configuration files:
	
	I1002 08:06:40.202547  514309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 08:06:40.212487  514309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 08:06:40.212571  514309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 08:06:40.220806  514309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 08:06:40.230001  514309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 08:06:40.230089  514309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 08:06:40.238517  514309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 08:06:40.247823  514309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 08:06:40.247897  514309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 08:06:40.255813  514309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 08:06:40.264744  514309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 08:06:40.264814  514309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 08:06:40.272700  514309 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 08:06:40.324447  514309 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 08:06:40.324833  514309 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 08:06:40.355978  514309 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 08:06:40.356058  514309 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 08:06:40.356116  514309 kubeadm.go:318] OS: Linux
	I1002 08:06:40.356169  514309 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 08:06:40.356226  514309 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 08:06:40.356282  514309 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 08:06:40.356340  514309 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 08:06:40.356395  514309 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 08:06:40.356449  514309 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 08:06:40.356501  514309 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 08:06:40.356555  514309 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 08:06:40.356607  514309 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 08:06:40.464156  514309 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 08:06:40.464298  514309 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 08:06:40.464402  514309 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 08:06:40.479464  514309 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 08:06:40.484763  514309 out.go:252]   - Generating certificates and keys ...
	I1002 08:06:40.484884  514309 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 08:06:40.484978  514309 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	W1002 08:06:40.838434  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	W1002 08:06:42.838909  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	I1002 08:06:41.347611  514309 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 08:06:41.614514  514309 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 08:06:42.484925  514309 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 08:06:43.045653  514309 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 08:06:43.135982  514309 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 08:06:43.136545  514309 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-810803 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 08:06:43.665663  514309 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 08:06:43.666285  514309 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-810803 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 08:06:44.417444  514309 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 08:06:45.234236  514309 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	W1002 08:06:44.839706  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	W1002 08:06:46.841176  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	I1002 08:06:45.987163  514309 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 08:06:45.987513  514309 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 08:06:46.923675  514309 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 08:06:47.149077  514309 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 08:06:47.977991  514309 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 08:06:49.371990  514309 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 08:06:49.464883  514309 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 08:06:49.465573  514309 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 08:06:49.468364  514309 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 08:06:49.472105  514309 out.go:252]   - Booting up control plane ...
	I1002 08:06:49.472223  514309 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 08:06:49.472311  514309 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 08:06:49.472385  514309 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 08:06:49.487745  514309 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 08:06:49.487867  514309 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 08:06:49.495395  514309 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 08:06:49.496379  514309 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 08:06:49.496433  514309 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 08:06:49.644417  514309 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 08:06:49.644822  514309 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 08:06:50.650003  514309 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001199448s
	I1002 08:06:50.650145  514309 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 08:06:50.650247  514309 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 08:06:50.650346  514309 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 08:06:50.650439  514309 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1002 08:06:48.843970  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	W1002 08:06:51.337766  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	I1002 08:06:57.242103  514309 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.592236058s
	I1002 08:06:57.443496  514309 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.793812651s
	I1002 08:06:57.651183  514309 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.001377976s
	I1002 08:06:57.670927  514309 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 08:06:57.685926  514309 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 08:06:57.700853  514309 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 08:06:57.701090  514309 kubeadm.go:318] [mark-control-plane] Marking the node auto-810803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 08:06:57.713614  514309 kubeadm.go:318] [bootstrap-token] Using token: rsphtv.zahzgr4n38b0kscw
	W1002 08:06:53.839366  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	W1002 08:06:56.341018  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	I1002 08:06:57.716520  514309 out.go:252]   - Configuring RBAC rules ...
	I1002 08:06:57.716661  514309 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 08:06:57.723628  514309 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 08:06:57.733981  514309 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 08:06:57.738196  514309 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 08:06:57.742649  514309 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 08:06:57.747276  514309 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 08:06:58.059114  514309 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 08:06:58.507231  514309 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 08:06:59.057893  514309 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 08:06:59.059191  514309 kubeadm.go:318] 
	I1002 08:06:59.059308  514309 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 08:06:59.059322  514309 kubeadm.go:318] 
	I1002 08:06:59.059405  514309 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 08:06:59.059410  514309 kubeadm.go:318] 
	I1002 08:06:59.059456  514309 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 08:06:59.059557  514309 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 08:06:59.059621  514309 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 08:06:59.059635  514309 kubeadm.go:318] 
	I1002 08:06:59.059691  514309 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 08:06:59.059707  514309 kubeadm.go:318] 
	I1002 08:06:59.059758  514309 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 08:06:59.059767  514309 kubeadm.go:318] 
	I1002 08:06:59.059821  514309 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 08:06:59.059905  514309 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 08:06:59.059983  514309 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 08:06:59.059993  514309 kubeadm.go:318] 
	I1002 08:06:59.060081  514309 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 08:06:59.060174  514309 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 08:06:59.060183  514309 kubeadm.go:318] 
	I1002 08:06:59.060270  514309 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token rsphtv.zahzgr4n38b0kscw \
	I1002 08:06:59.060380  514309 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf \
	I1002 08:06:59.060405  514309 kubeadm.go:318] 	--control-plane 
	I1002 08:06:59.060413  514309 kubeadm.go:318] 
	I1002 08:06:59.060502  514309 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 08:06:59.060510  514309 kubeadm.go:318] 
	I1002 08:06:59.060595  514309 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token rsphtv.zahzgr4n38b0kscw \
	I1002 08:06:59.060704  514309 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf 
	I1002 08:06:59.066079  514309 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 08:06:59.066346  514309 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 08:06:59.066465  514309 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 08:06:59.066492  514309 cni.go:84] Creating CNI manager for ""
	I1002 08:06:59.066502  514309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:06:59.071688  514309 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 08:06:59.074715  514309 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 08:06:59.078943  514309 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 08:06:59.078963  514309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 08:06:59.092899  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 08:06:59.400439  514309 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 08:06:59.400545  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:06:59.400575  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-810803 minikube.k8s.io/updated_at=2025_10_02T08_06_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=auto-810803 minikube.k8s.io/primary=true
	I1002 08:06:59.427904  514309 ops.go:34] apiserver oom_adj: -16
	I1002 08:06:59.558225  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:00.058997  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:00.559287  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1002 08:06:58.840131  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	I1002 08:07:00.371154  511270 pod_ready.go:94] pod "coredns-66bc5c9577-cscrn" is "Ready"
	I1002 08:07:00.371248  511270 pod_ready.go:86] duration metric: took 32.539690981s for pod "coredns-66bc5c9577-cscrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:00.383728  511270 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:00.409064  511270 pod_ready.go:94] pod "etcd-default-k8s-diff-port-417078" is "Ready"
	I1002 08:07:00.409148  511270 pod_ready.go:86] duration metric: took 25.389511ms for pod "etcd-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:00.482501  511270 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:00.488418  511270 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-417078" is "Ready"
	I1002 08:07:00.488449  511270 pod_ready.go:86] duration metric: took 5.915336ms for pod "kube-apiserver-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:00.491476  511270 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:00.536605  511270 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-417078" is "Ready"
	I1002 08:07:00.536634  511270 pod_ready.go:86] duration metric: took 45.126727ms for pod "kube-controller-manager-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:00.736843  511270 pod_ready.go:83] waiting for pod "kube-proxy-g6hc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:01.136824  511270 pod_ready.go:94] pod "kube-proxy-g6hc4" is "Ready"
	I1002 08:07:01.136852  511270 pod_ready.go:86] duration metric: took 399.978002ms for pod "kube-proxy-g6hc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:01.336100  511270 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:01.736772  511270 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-417078" is "Ready"
	I1002 08:07:01.736802  511270 pod_ready.go:86] duration metric: took 400.616966ms for pod "kube-scheduler-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:01.736816  511270 pod_ready.go:40] duration metric: took 33.921023761s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:07:01.793866  511270 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 08:07:01.798913  511270 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-417078" cluster and "default" namespace by default
	I1002 08:07:01.058621  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:01.559294  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:02.058568  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:02.559230  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:03.058610  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:03.558828  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:04.059222  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:04.559160  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:04.670461  514309 kubeadm.go:1113] duration metric: took 5.269978737s to wait for elevateKubeSystemPrivileges
	I1002 08:07:04.670495  514309 kubeadm.go:402] duration metric: took 24.532960537s to StartCluster
	I1002 08:07:04.670513  514309 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:07:04.670589  514309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:07:04.672625  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:07:04.673213  514309 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:07:04.675849  514309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 08:07:04.676188  514309 config.go:182] Loaded profile config "auto-810803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:07:04.676195  514309 out.go:179] * Verifying Kubernetes components...
	I1002 08:07:04.676599  514309 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 08:07:04.676696  514309 addons.go:69] Setting storage-provisioner=true in profile "auto-810803"
	I1002 08:07:04.676709  514309 addons.go:238] Setting addon storage-provisioner=true in "auto-810803"
	I1002 08:07:04.676737  514309 host.go:66] Checking if "auto-810803" exists ...
	I1002 08:07:04.677039  514309 addons.go:69] Setting default-storageclass=true in profile "auto-810803"
	I1002 08:07:04.677060  514309 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-810803"
	I1002 08:07:04.677233  514309 cli_runner.go:164] Run: docker container inspect auto-810803 --format={{.State.Status}}
	I1002 08:07:04.677383  514309 cli_runner.go:164] Run: docker container inspect auto-810803 --format={{.State.Status}}
	I1002 08:07:04.680308  514309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:07:04.716131  514309 addons.go:238] Setting addon default-storageclass=true in "auto-810803"
	I1002 08:07:04.716172  514309 host.go:66] Checking if "auto-810803" exists ...
	I1002 08:07:04.716591  514309 cli_runner.go:164] Run: docker container inspect auto-810803 --format={{.State.Status}}
	I1002 08:07:04.725841  514309 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 08:07:04.728760  514309 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:07:04.728783  514309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 08:07:04.728851  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:07:04.766924  514309 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 08:07:04.766948  514309 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 08:07:04.767019  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:07:04.784695  514309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa Username:docker}
	I1002 08:07:04.814160  514309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa Username:docker}
	I1002 08:07:04.946130  514309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 08:07:04.961674  514309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:07:05.019481  514309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:07:05.077307  514309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:07:05.497458  514309 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
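	The pipeline a few lines above rewrites the CoreDNS ConfigMap so that a hosts block mapping host.minikube.internal to 192.168.85.1 is inserted ahead of the forward-to-/etc/resolv.conf plugin; the "host record injected" line confirms the replace went through. A quick manual check, assuming the kubeconfig context minikube creates for this profile (illustrative only, not part of the test log):

		# Dump the live Corefile and show the injected hosts block.
		kubectl --context auto-810803 -n kube-system get configmap coredns \
		  -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'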
	I1002 08:07:05.499587  514309 node_ready.go:35] waiting up to 15m0s for node "auto-810803" to be "Ready" ...
	I1002 08:07:05.821907  514309 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 08:07:05.824759  514309 addons.go:514] duration metric: took 1.148148101s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 08:07:06.011097  514309 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-810803" context rescaled to 1 replicas
	W1002 08:07:07.502627  514309 node_ready.go:57] node "auto-810803" has "Ready":"False" status (will retry)
	W1002 08:07:09.502938  514309 node_ready.go:57] node "auto-810803" has "Ready":"False" status (will retry)
	W1002 08:07:11.506880  514309 node_ready.go:57] node "auto-810803" has "Ready":"False" status (will retry)
	W1002 08:07:14.008259  514309 node_ready.go:57] node "auto-810803" has "Ready":"False" status (will retry)
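	The node_ready warnings above are minikube polling the freshly created node until its Ready condition turns True, which normally happens once the kindnet CNI pod configured earlier is running. An equivalent one-off check, assuming the kubeconfig context created for this profile (illustrative only, not part of the test log):

		# Print only the Ready condition of the node being waited on.
		kubectl --context auto-810803 get node auto-810803 \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'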
	
	
	==> CRI-O <==
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.00347145Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.012108141Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.012149373Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.012173578Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.016029441Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.016068424Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.016094984Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.020087612Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.020124101Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.020150201Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.023936641Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.023973621Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.760247897Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=87379860-4028-425a-adeb-5bc5e14e6628 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.761525644Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9a8e4ee1-9d28-4cda-874f-9cd9bc2de7a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.766425127Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t/dashboard-metrics-scraper" id=6ff1ef11-b3fd-43ba-ad64-64c461ca061b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.766705302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.776312439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.776841814Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.795236003Z" level=info msg="Created container 89a092255a6551e8d029774a61b80e3deae1f18d316632be4c9595a6fce3e283: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t/dashboard-metrics-scraper" id=6ff1ef11-b3fd-43ba-ad64-64c461ca061b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.796873278Z" level=info msg="Starting container: 89a092255a6551e8d029774a61b80e3deae1f18d316632be4c9595a6fce3e283" id=fd4cf0df-6150-49f9-a090-2c2800db2b47 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:07:06 default-k8s-diff-port-417078 conmon[1692]: conmon 89a092255a6551e8d029 <ninfo>: container 1694 exited with status 1
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.810225073Z" level=info msg="Started container" PID=1694 containerID=89a092255a6551e8d029774a61b80e3deae1f18d316632be4c9595a6fce3e283 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t/dashboard-metrics-scraper id=fd4cf0df-6150-49f9-a090-2c2800db2b47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d8229060c645ab689f9fc104345fe6238ca4372a8e9894308d7b7018d8b4b063
	Oct 02 08:07:07 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:07.076086643Z" level=info msg="Removing container: 2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff" id=9c7822f4-3c0a-495a-a6a0-12d8f2823ca3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:07:07 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:07.086889303Z" level=info msg="Error loading conmon cgroup of container 2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff: cgroup deleted" id=9c7822f4-3c0a-495a-a6a0-12d8f2823ca3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:07:07 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:07.092026442Z" level=info msg="Removed container 2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t/dashboard-metrics-scraper" id=9c7822f4-3c0a-495a-a6a0-12d8f2823ca3 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	89a092255a655       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   2                   d8229060c645a       dashboard-metrics-scraper-6ffb444bf9-wrn9t             kubernetes-dashboard
	60656c47cfe1b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   23095c8361628       storage-provisioner                                    kube-system
	3424e6b891d1d       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago      Running             kubernetes-dashboard        0                   d1f748f8546e8       kubernetes-dashboard-855c9754f9-zm2mb                  kubernetes-dashboard
	7857d6a2c27ee       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   1b7eace8b394b       coredns-66bc5c9577-cscrn                               kube-system
	1c2b537ef32d1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago      Running             kube-proxy                  1                   03ecb93eb54f2       kube-proxy-g6hc4                                       kube-system
	3e3390ef7a71e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   23095c8361628       storage-provisioner                                    kube-system
	5e20db31d5509       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   06c396ee8301f       kindnet-xvmxj                                          kube-system
	8df20169712b9       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   6fdfe172c56df       busybox                                                default
	58e9ec4d18140       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   a395b5248ad0c       etcd-default-k8s-diff-port-417078                      kube-system
	3fe23dab4fa0b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   f984eb2c0b35a       kube-controller-manager-default-k8s-diff-port-417078   kube-system
	51204ad2326f2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   edddaf4b2aa6e       kube-scheduler-default-k8s-diff-port-417078            kube-system
	3a01e925d0339       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   51f94073835c2       kube-apiserver-default-k8s-diff-port-417078            kube-system
	
	
	==> coredns [7857d6a2c27eedcc3e1e3425fc86feebd1ed00455b0b25e76849e78058d175a8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43124 - 46038 "HINFO IN 5148158578930309505.2170458187798132499. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018081981s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-417078
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-417078
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=default-k8s-diff-port-417078
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T08_04_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 08:04:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-417078
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:07:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:06:54 +0000   Thu, 02 Oct 2025 08:04:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:06:54 +0000   Thu, 02 Oct 2025 08:04:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:06:54 +0000   Thu, 02 Oct 2025 08:04:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 08:06:54 +0000   Thu, 02 Oct 2025 08:05:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-417078
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2a4ec1c6a264eebaea9df903962cb0c
	  System UUID:                f4fac9d3-943a-43ee-b70b-67637923d71e
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-cscrn                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m16s
	  kube-system                 etcd-default-k8s-diff-port-417078                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-xvmxj                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-417078             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-417078    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-g6hc4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-default-k8s-diff-port-417078             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wrn9t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zm2mb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m15s                  kube-proxy       
	  Normal   Starting                 49s                    kube-proxy       
	  Warning  CgroupV1                 2m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m22s                  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m22s                  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s                  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m17s                  node-controller  Node default-k8s-diff-port-417078 event: Registered Node default-k8s-diff-port-417078 in Controller
	  Normal   NodeReady                95s                    kubelet          Node default-k8s-diff-port-417078 status is now: NodeReady
	  Normal   Starting                 60s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)      kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)      kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node default-k8s-diff-port-417078 event: Registered Node default-k8s-diff-port-417078 in Controller
	
	
	==> dmesg <==
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:00] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:03] overlayfs: idmapped layers are currently not supported
	[ +38.953360] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:05] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:06] overlayfs: idmapped layers are currently not supported
	[ +14.824071] overlayfs: idmapped layers are currently not supported
	[ +33.610286] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [58e9ec4d181400c19075bad03bd7c590fa61e2f6e890fe6423d6ab1e2a40928d] <==
	{"level":"warn","ts":"2025-10-02T08:06:22.209404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.253192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.265221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.289997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.316419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.354850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.383992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.421293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.448297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.461679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.489386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.513976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.579179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.580023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.608187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.775454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41952","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T08:06:26.447319Z","caller":"traceutil/trace.go:172","msg":"trace[1904468523] transaction","detail":"{read_only:false; response_revision:541; number_of_response:1; }","duration":"112.252057ms","start":"2025-10-02T08:06:26.335051Z","end":"2025-10-02T08:06:26.447303Z","steps":["trace[1904468523] 'process raft request'  (duration: 72.197107ms)","trace[1904468523] 'compare'  (duration: 39.793352ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T08:06:26.447583Z","caller":"traceutil/trace.go:172","msg":"trace[1959837794] transaction","detail":"{read_only:false; response_revision:542; number_of_response:1; }","duration":"112.427288ms","start":"2025-10-02T08:06:26.335146Z","end":"2025-10-02T08:06:26.447573Z","steps":["trace[1959837794] 'process raft request'  (duration: 112.015928ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T08:06:26.607999Z","caller":"traceutil/trace.go:172","msg":"trace[846315445] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"108.541811ms","start":"2025-10-02T08:06:26.499441Z","end":"2025-10-02T08:06:26.607982Z","steps":["trace[846315445] 'process raft request'  (duration: 62.242224ms)","trace[846315445] 'compare'  (duration: 46.051396ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T08:06:26.608196Z","caller":"traceutil/trace.go:172","msg":"trace[614259474] transaction","detail":"{read_only:false; response_revision:548; number_of_response:1; }","duration":"108.564721ms","start":"2025-10-02T08:06:26.499625Z","end":"2025-10-02T08:06:26.608190Z","steps":["trace[614259474] 'process raft request'  (duration: 108.191294ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T08:06:26.764790Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.351948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T08:06:26.764952Z","caller":"traceutil/trace.go:172","msg":"trace[1200747866] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:551; }","duration":"104.527596ms","start":"2025-10-02T08:06:26.660409Z","end":"2025-10-02T08:06:26.764937Z","steps":["trace[1200747866] 'agreement among raft nodes before linearized reading'  (duration: 66.617018ms)","trace[1200747866] 'range keys from in-memory index tree'  (duration: 37.716197ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T08:06:26.764892Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.634342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:ttl-after-finished-controller\" limit:1 ","response":"range_response_count:1 size:695"}
	{"level":"info","ts":"2025-10-02T08:06:26.765442Z","caller":"traceutil/trace.go:172","msg":"trace[834674283] range","detail":"{range_begin:/registry/clusterroles/system:controller:ttl-after-finished-controller; range_end:; response_count:1; response_revision:551; }","duration":"117.1883ms","start":"2025-10-02T08:06:26.648241Z","end":"2025-10-02T08:06:26.765429Z","steps":["trace[834674283] 'agreement among raft nodes before linearized reading'  (duration: 78.791776ms)","trace[834674283] 'range keys from in-memory index tree'  (duration: 37.779328ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T08:06:26.765314Z","caller":"traceutil/trace.go:172","msg":"trace[756088777] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"116.837249ms","start":"2025-10-02T08:06:26.648465Z","end":"2025-10-02T08:06:26.765302Z","steps":["trace[756088777] 'process raft request'  (duration: 78.591816ms)","trace[756088777] 'compare'  (duration: 37.677969ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:07:16 up  2:49,  0 user,  load average: 3.71, 3.35, 2.47
	Linux default-k8s-diff-port-417078 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e20db31d550901a0af4d1d01bbd43e4c4e376a5f51d16b6befe7b4fd80f53fc] <==
	I1002 08:06:25.803750       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 08:06:25.803988       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 08:06:25.804117       1 main.go:148] setting mtu 1500 for CNI 
	I1002 08:06:25.804130       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 08:06:25.804140       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T08:06:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 08:06:26.000728       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 08:06:26.000758       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 08:06:26.000767       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 08:06:26.001116       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 08:06:56.001205       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 08:06:56.001211       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 08:06:56.001356       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 08:06:56.001506       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 08:06:57.601082       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 08:06:57.601216       1 metrics.go:72] Registering metrics
	I1002 08:06:57.601335       1 controller.go:711] "Syncing nftables rules"
	I1002 08:07:06.002974       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:07:06.003061       1 main.go:301] handling current node
	I1002 08:07:16.004393       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:07:16.004441       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3a01e925d0339bb867ed641377431c1c576bffc854679e92eb2e19a036a34feb] <==
	I1002 08:06:24.304378       1 cache.go:39] Caches are synced for autoregister controller
	I1002 08:06:24.307254       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 08:06:24.309114       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 08:06:24.309133       1 policy_source.go:240] refreshing policies
	I1002 08:06:24.326216       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 08:06:24.334908       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 08:06:24.334932       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 08:06:24.343714       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 08:06:24.343831       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 08:06:24.344026       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 08:06:24.344084       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 08:06:24.360092       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:06:24.438834       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1002 08:06:24.555531       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 08:06:24.785607       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 08:06:24.997974       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:06:26.174646       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 08:06:26.633700       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 08:06:26.891748       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:06:26.948300       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:06:27.095316       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.119.123"}
	I1002 08:06:27.126740       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.144.43"}
	I1002 08:06:28.950092       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 08:06:29.044555       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 08:06:29.195757       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3fe23dab4fa0ba272028a64c70d3af8948cb437fb69796d50bf0133f85d526af] <==
	I1002 08:06:28.900279       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 08:06:28.907635       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 08:06:28.907679       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 08:06:28.907704       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 08:06:28.909394       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 08:06:28.909430       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 08:06:28.915185       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 08:06:28.915278       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 08:06:28.915310       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 08:06:28.915431       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 08:06:28.918961       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 08:06:28.919383       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 08:06:28.927353       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 08:06:28.935865       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 08:06:28.936430       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 08:06:28.937393       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 08:06:28.937455       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 08:06:28.937558       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 08:06:28.942223       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 08:06:28.943707       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 08:06:28.945608       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 08:06:28.979758       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:06:29.006353       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:06:29.006381       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 08:06:29.006389       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [1c2b537ef32d116dc218025592702865324dd99cf3c1c074eda8168c73deb8fb] <==
	I1002 08:06:26.218782       1 server_linux.go:53] "Using iptables proxy"
	I1002 08:06:26.387751       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 08:06:26.488031       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 08:06:26.488636       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 08:06:26.488738       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 08:06:26.688430       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 08:06:26.688489       1 server_linux.go:132] "Using iptables Proxier"
	I1002 08:06:26.692996       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 08:06:26.693524       1 server.go:527] "Version info" version="v1.34.1"
	I1002 08:06:26.693545       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:06:26.698043       1 config.go:200] "Starting service config controller"
	I1002 08:06:26.698064       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 08:06:26.698086       1 config.go:106] "Starting endpoint slice config controller"
	I1002 08:06:26.698090       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 08:06:26.698107       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 08:06:26.698112       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 08:06:26.707870       1 config.go:309] "Starting node config controller"
	I1002 08:06:26.707903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 08:06:26.707914       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 08:06:26.799612       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 08:06:26.799731       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 08:06:26.799772       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [51204ad2326f23863feeb5f81eec088fffc09135e7fccfb05c306b274a31f295] <==
	I1002 08:06:23.847580       1 serving.go:386] Generated self-signed cert in-memory
	I1002 08:06:26.793379       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 08:06:26.793413       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:06:26.837726       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 08:06:26.837767       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 08:06:26.837821       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:06:26.837837       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:06:26.837860       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:06:26.837875       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:06:26.843522       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 08:06:26.843486       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 08:06:26.938337       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:06:26.938416       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 08:06:26.938512       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 08:06:30 default-k8s-diff-port-417078 kubelet[777]: E1002 08:06:30.631114     777 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/60d8096f-e9e3-4d0e-8f16-67ab47b4563e-kube-api-access-pvrv9 podName:60d8096f-e9e3-4d0e-8f16-67ab47b4563e nodeName:}" failed. No retries permitted until 2025-10-02 08:06:31.131065744 +0000 UTC m=+14.692866295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pvrv9" (UniqueName: "kubernetes.io/projected/60d8096f-e9e3-4d0e-8f16-67ab47b4563e-kube-api-access-pvrv9") pod "kubernetes-dashboard-855c9754f9-zm2mb" (UID: "60d8096f-e9e3-4d0e-8f16-67ab47b4563e") : failed to sync configmap cache: timed out waiting for the condition
	Oct 02 08:06:30 default-k8s-diff-port-417078 kubelet[777]: E1002 08:06:30.633039     777 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 02 08:06:30 default-k8s-diff-port-417078 kubelet[777]: E1002 08:06:30.633185     777 projected.go:196] Error preparing data for projected volume kube-api-access-dzgwt for pod kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t: failed to sync configmap cache: timed out waiting for the condition
	Oct 02 08:06:30 default-k8s-diff-port-417078 kubelet[777]: E1002 08:06:30.633283     777 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34-kube-api-access-dzgwt podName:0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34 nodeName:}" failed. No retries permitted until 2025-10-02 08:06:31.133260251 +0000 UTC m=+14.695060801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dzgwt" (UniqueName: "kubernetes.io/projected/0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34-kube-api-access-dzgwt") pod "dashboard-metrics-scraper-6ffb444bf9-wrn9t" (UID: "0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34") : failed to sync configmap cache: timed out waiting for the condition
	Oct 02 08:06:31 default-k8s-diff-port-417078 kubelet[777]: W1002 08:06:31.497864     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/crio-d1f748f8546e88eb6498f25a864ece614cd728f04c4f6b5aaa44471834f291e6 WatchSource:0}: Error finding container d1f748f8546e88eb6498f25a864ece614cd728f04c4f6b5aaa44471834f291e6: Status 404 returned error can't find the container with id d1f748f8546e88eb6498f25a864ece614cd728f04c4f6b5aaa44471834f291e6
	Oct 02 08:06:31 default-k8s-diff-port-417078 kubelet[777]: W1002 08:06:31.525419     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/crio-d8229060c645ab689f9fc104345fe6238ca4372a8e9894308d7b7018d8b4b063 WatchSource:0}: Error finding container d8229060c645ab689f9fc104345fe6238ca4372a8e9894308d7b7018d8b4b063: Status 404 returned error can't find the container with id d8229060c645ab689f9fc104345fe6238ca4372a8e9894308d7b7018d8b4b063
	Oct 02 08:06:39 default-k8s-diff-port-417078 kubelet[777]: I1002 08:06:39.046356     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zm2mb" podStartSLOduration=3.282872643 podStartE2EDuration="10.04633666s" podCreationTimestamp="2025-10-02 08:06:29 +0000 UTC" firstStartedPulling="2025-10-02 08:06:31.502452403 +0000 UTC m=+15.064252954" lastFinishedPulling="2025-10-02 08:06:38.26591642 +0000 UTC m=+21.827716971" observedRunningTime="2025-10-02 08:06:39.045963036 +0000 UTC m=+22.607763611" watchObservedRunningTime="2025-10-02 08:06:39.04633666 +0000 UTC m=+22.608137210"
	Oct 02 08:06:46 default-k8s-diff-port-417078 kubelet[777]: I1002 08:06:46.009074     777 scope.go:117] "RemoveContainer" containerID="ec70030b03597ebd0d38458a23a7978b766dc8556e2d715600829de5014c2d04"
	Oct 02 08:06:47 default-k8s-diff-port-417078 kubelet[777]: I1002 08:06:47.013021     777 scope.go:117] "RemoveContainer" containerID="ec70030b03597ebd0d38458a23a7978b766dc8556e2d715600829de5014c2d04"
	Oct 02 08:06:47 default-k8s-diff-port-417078 kubelet[777]: I1002 08:06:47.013336     777 scope.go:117] "RemoveContainer" containerID="2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff"
	Oct 02 08:06:47 default-k8s-diff-port-417078 kubelet[777]: E1002 08:06:47.013479     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrn9t_kubernetes-dashboard(0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t" podUID="0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34"
	Oct 02 08:06:48 default-k8s-diff-port-417078 kubelet[777]: I1002 08:06:48.018293     777 scope.go:117] "RemoveContainer" containerID="2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff"
	Oct 02 08:06:48 default-k8s-diff-port-417078 kubelet[777]: E1002 08:06:48.019007     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrn9t_kubernetes-dashboard(0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t" podUID="0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34"
	Oct 02 08:06:51 default-k8s-diff-port-417078 kubelet[777]: I1002 08:06:51.500936     777 scope.go:117] "RemoveContainer" containerID="2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff"
	Oct 02 08:06:51 default-k8s-diff-port-417078 kubelet[777]: E1002 08:06:51.501143     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrn9t_kubernetes-dashboard(0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t" podUID="0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34"
	Oct 02 08:06:57 default-k8s-diff-port-417078 kubelet[777]: I1002 08:06:57.042654     777 scope.go:117] "RemoveContainer" containerID="3e3390ef7a71ec7064e94b1c428bc44ed214876f28e31ea3bc944aab82217db4"
	Oct 02 08:07:06 default-k8s-diff-port-417078 kubelet[777]: I1002 08:07:06.759828     777 scope.go:117] "RemoveContainer" containerID="2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff"
	Oct 02 08:07:07 default-k8s-diff-port-417078 kubelet[777]: I1002 08:07:07.073492     777 scope.go:117] "RemoveContainer" containerID="2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff"
	Oct 02 08:07:07 default-k8s-diff-port-417078 kubelet[777]: I1002 08:07:07.073847     777 scope.go:117] "RemoveContainer" containerID="89a092255a6551e8d029774a61b80e3deae1f18d316632be4c9595a6fce3e283"
	Oct 02 08:07:07 default-k8s-diff-port-417078 kubelet[777]: E1002 08:07:07.074100     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrn9t_kubernetes-dashboard(0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t" podUID="0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34"
	Oct 02 08:07:11 default-k8s-diff-port-417078 kubelet[777]: I1002 08:07:11.501250     777 scope.go:117] "RemoveContainer" containerID="89a092255a6551e8d029774a61b80e3deae1f18d316632be4c9595a6fce3e283"
	Oct 02 08:07:11 default-k8s-diff-port-417078 kubelet[777]: E1002 08:07:11.501947     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrn9t_kubernetes-dashboard(0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t" podUID="0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34"
	Oct 02 08:07:14 default-k8s-diff-port-417078 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 08:07:14 default-k8s-diff-port-417078 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 08:07:14 default-k8s-diff-port-417078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [3424e6b891d1d444a5fd9113b3934912df22aa8b2559334195df2b60a5decea2] <==
	2025/10/02 08:06:38 Starting overwatch
	2025/10/02 08:06:38 Using namespace: kubernetes-dashboard
	2025/10/02 08:06:38 Using in-cluster config to connect to apiserver
	2025/10/02 08:06:38 Using secret token for csrf signing
	2025/10/02 08:06:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 08:06:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 08:06:38 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 08:06:38 Generating JWE encryption key
	2025/10/02 08:06:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 08:06:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 08:06:40 Initializing JWE encryption key from synchronized object
	2025/10/02 08:06:40 Creating in-cluster Sidecar client
	2025/10/02 08:06:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 08:06:40 Serving insecurely on HTTP port: 9090
	2025/10/02 08:07:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3e3390ef7a71ec7064e94b1c428bc44ed214876f28e31ea3bc944aab82217db4] <==
	I1002 08:06:26.147496       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 08:06:56.149703       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [60656c47cfe1b3b0b174507dbed097964a91a1226d4508163960b2e21510a0fe] <==
	I1002 08:06:57.131445       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 08:06:57.172598       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 08:06:57.173226       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 08:06:57.175669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:00.630748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:04.896072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:08.495465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:11.548872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:14.570843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:14.578662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:07:14.578802       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 08:07:14.579005       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-417078_865da2d9-fe60-4786-9791-4e1237283d1f!
	I1002 08:07:14.580117       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3841bb3-24e2-47d7-9ba0-774032dd0ed1", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-417078_865da2d9-fe60-4786-9791-4e1237283d1f became leader
	W1002 08:07:14.589286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:14.601182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:07:14.679956       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-417078_865da2d9-fe60-4786-9791-4e1237283d1f!
	W1002 08:07:16.604874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:16.612456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-417078 -n default-k8s-diff-port-417078
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-417078 -n default-k8s-diff-port-417078: exit status 2 (408.964203ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-417078 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-417078
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-417078:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba",
	        "Created": "2025-10-02T08:04:28.399453084Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 511456,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T08:06:08.82098825Z",
	            "FinishedAt": "2025-10-02T08:06:07.611013554Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/hosts",
	        "LogPath": "/var/lib/docker/containers/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba-json.log",
	        "Name": "/default-k8s-diff-port-417078",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-417078:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-417078",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba",
	                "LowerDir": "/var/lib/docker/overlay2/0ca735e4bdb118c286be480b4f12dd3f904411128e2680db9b5f872634cd93c0-init/diff:/var/lib/docker/overlay2/351964ba6fa083af33beecbc6598b3b0b173af42008b0dfb1e7467a52b54316d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0ca735e4bdb118c286be480b4f12dd3f904411128e2680db9b5f872634cd93c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0ca735e4bdb118c286be480b4f12dd3f904411128e2680db9b5f872634cd93c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0ca735e4bdb118c286be480b4f12dd3f904411128e2680db9b5f872634cd93c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-417078",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-417078/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-417078",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-417078",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-417078",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b505d990c2bfb9da36ccae88f4562aca24d2baeb18a5ce7d7e0e80cfe0597021",
	            "SandboxKey": "/var/run/docker/netns/b505d990c2bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-417078": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:9a:f4:17:64:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d1780ea11813add7386f7a8e327ace3f3a59d3c8ad3cf5599ed166ee793fe5a6",
	                    "EndpointID": "c1f2b8b72d37e2ae07cb2ee1b6a1ec68f4ac0c82fa34cc2d8f1dcaa4780ab38d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-417078",
	                        "9b8a295e3342"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417078 -n default-k8s-diff-port-417078
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417078 -n default-k8s-diff-port-417078: exit status 2 (344.095763ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-417078 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-417078 logs -n 25: (1.327328255s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-604182 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p no-preload-604182                                                                                                                                                                                                                          │ no-preload-604182            │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ delete  │ -p disable-driver-mounts-466206                                                                                                                                                                                                               │ disable-driver-mounts-466206 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ start   │ -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:05 UTC │
	│ image   │ embed-certs-171347 image list --format=json                                                                                                                                                                                                   │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │ 02 Oct 25 08:04 UTC │
	│ pause   │ -p embed-certs-171347 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:04 UTC │                     │
	│ delete  │ -p embed-certs-171347                                                                                                                                                                                                                         │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ delete  │ -p embed-certs-171347                                                                                                                                                                                                                         │ embed-certs-171347           │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ start   │ -p newest-cni-009374 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ addons  │ enable metrics-server -p newest-cni-009374 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-417078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │                     │
	│ stop    │ -p newest-cni-009374 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ addons  │ enable dashboard -p newest-cni-009374 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:05 UTC │
	│ start   │ -p newest-cni-009374 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:06 UTC │
	│ stop    │ -p default-k8s-diff-port-417078 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:05 UTC │ 02 Oct 25 08:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-417078 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │ 02 Oct 25 08:06 UTC │
	│ start   │ -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │ 02 Oct 25 08:07 UTC │
	│ image   │ newest-cni-009374 image list --format=json                                                                                                                                                                                                    │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │ 02 Oct 25 08:06 UTC │
	│ pause   │ -p newest-cni-009374 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │                     │
	│ delete  │ -p newest-cni-009374                                                                                                                                                                                                                          │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │ 02 Oct 25 08:06 UTC │
	│ delete  │ -p newest-cni-009374                                                                                                                                                                                                                          │ newest-cni-009374            │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │ 02 Oct 25 08:06 UTC │
	│ start   │ -p auto-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-810803                  │ jenkins │ v1.37.0 │ 02 Oct 25 08:06 UTC │                     │
	│ image   │ default-k8s-diff-port-417078 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:07 UTC │ 02 Oct 25 08:07 UTC │
	│ pause   │ -p default-k8s-diff-port-417078 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-417078 │ jenkins │ v1.37.0 │ 02 Oct 25 08:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 08:06:20
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 08:06:20.837857  514309 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:06:20.838096  514309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:06:20.838126  514309 out.go:374] Setting ErrFile to fd 2...
	I1002 08:06:20.838145  514309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:06:20.838442  514309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 08:06:20.838914  514309 out.go:368] Setting JSON to false
	I1002 08:06:20.839956  514309 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10132,"bootTime":1759382249,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 08:06:20.840053  514309 start.go:140] virtualization:  
	I1002 08:06:20.844126  514309 out.go:179] * [auto-810803] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:06:20.848552  514309 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:06:20.848626  514309 notify.go:220] Checking for updates...
	I1002 08:06:20.855019  514309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:06:20.858166  514309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:06:20.861105  514309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 08:06:20.864039  514309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:06:20.866931  514309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:06:20.870294  514309 config.go:182] Loaded profile config "default-k8s-diff-port-417078": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:06:20.870393  514309 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:06:20.915774  514309 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:06:20.915896  514309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:06:21.033549  514309 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 08:06:21.02286989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:06:21.033656  514309 docker.go:318] overlay module found
	I1002 08:06:21.036826  514309 out.go:179] * Using the docker driver based on user configuration
	I1002 08:06:21.039672  514309 start.go:304] selected driver: docker
	I1002 08:06:21.039692  514309 start.go:924] validating driver "docker" against <nil>
	I1002 08:06:21.039706  514309 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:06:21.040440  514309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:06:21.139480  514309 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 08:06:21.129471336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:06:21.139633  514309 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 08:06:21.139862  514309 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:06:21.142828  514309 out.go:179] * Using Docker driver with root privileges
	I1002 08:06:21.145632  514309 cni.go:84] Creating CNI manager for ""
	I1002 08:06:21.145711  514309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:06:21.145721  514309 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 08:06:21.145806  514309 start.go:348] cluster config:
	{Name:auto-810803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:06:21.148954  514309 out.go:179] * Starting "auto-810803" primary control-plane node in "auto-810803" cluster
	I1002 08:06:21.151810  514309 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 08:06:21.154755  514309 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 08:06:21.157637  514309 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:06:21.157707  514309 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 08:06:21.157716  514309 cache.go:58] Caching tarball of preloaded images
	I1002 08:06:21.157806  514309 preload.go:233] Found /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 08:06:21.157814  514309 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 08:06:21.157921  514309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/config.json ...
	I1002 08:06:21.157938  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/config.json: {Name:mka66e6efdbcad76fc2b29a7977775d2fbacd1b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:21.158116  514309 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 08:06:21.189144  514309 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 08:06:21.189164  514309 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 08:06:21.189184  514309 cache.go:232] Successfully downloaded all kic artifacts
	I1002 08:06:21.189207  514309 start.go:360] acquireMachinesLock for auto-810803: {Name:mk08df67a7e417b0dfa95a73d23b98c7c3ff0065 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 08:06:21.189308  514309 start.go:364] duration metric: took 85.03µs to acquireMachinesLock for "auto-810803"
	I1002 08:06:21.189333  514309 start.go:93] Provisioning new machine with config: &{Name:auto-810803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:06:21.189398  514309 start.go:125] createHost starting for "" (driver="docker")
	I1002 08:06:18.474413  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 08:06:18.474436  511270 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 08:06:18.492131  511270 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:06:18.507973  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 08:06:18.507993  511270 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 08:06:18.612766  511270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:06:18.643325  511270 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-417078" to be "Ready" ...
	I1002 08:06:18.667715  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 08:06:18.667780  511270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 08:06:18.676235  511270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:06:18.727168  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 08:06:18.727244  511270 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 08:06:18.899989  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 08:06:18.900015  511270 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 08:06:19.044775  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 08:06:19.044801  511270 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 08:06:19.080590  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 08:06:19.080617  511270 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 08:06:19.103731  511270 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 08:06:19.103762  511270 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 08:06:19.124689  511270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 08:06:21.192947  514309 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 08:06:21.193184  514309 start.go:159] libmachine.API.Create for "auto-810803" (driver="docker")
	I1002 08:06:21.193239  514309 client.go:168] LocalClient.Create starting
	I1002 08:06:21.193308  514309 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem
	I1002 08:06:21.193341  514309 main.go:141] libmachine: Decoding PEM data...
	I1002 08:06:21.193357  514309 main.go:141] libmachine: Parsing certificate...
	I1002 08:06:21.193410  514309 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem
	I1002 08:06:21.193429  514309 main.go:141] libmachine: Decoding PEM data...
	I1002 08:06:21.193438  514309 main.go:141] libmachine: Parsing certificate...
	I1002 08:06:21.193792  514309 cli_runner.go:164] Run: docker network inspect auto-810803 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 08:06:21.225942  514309 cli_runner.go:211] docker network inspect auto-810803 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 08:06:21.226039  514309 network_create.go:284] running [docker network inspect auto-810803] to gather additional debugging logs...
	I1002 08:06:21.226056  514309 cli_runner.go:164] Run: docker network inspect auto-810803
	W1002 08:06:21.276914  514309 cli_runner.go:211] docker network inspect auto-810803 returned with exit code 1
	I1002 08:06:21.276941  514309 network_create.go:287] error running [docker network inspect auto-810803]: docker network inspect auto-810803: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-810803 not found
	I1002 08:06:21.276953  514309 network_create.go:289] output of [docker network inspect auto-810803]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-810803 not found
	
	** /stderr **
	I1002 08:06:21.277054  514309 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:06:21.302297  514309 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-87a294cab4b5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:50:ad:a1:2a:88} reservation:<nil>}
	I1002 08:06:21.302674  514309 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-560172b9232e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:9f:ec:fb:3f:87} reservation:<nil>}
	I1002 08:06:21.302819  514309 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2eae6334e56d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:6a:a0:79:3a:d9} reservation:<nil>}
	I1002 08:06:21.303120  514309 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d1780ea11813 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:84:d7:de:73:b2} reservation:<nil>}
	I1002 08:06:21.303542  514309 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a3ec0}
	I1002 08:06:21.303559  514309 network_create.go:124] attempt to create docker network auto-810803 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 08:06:21.303621  514309 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-810803 auto-810803
	I1002 08:06:21.386427  514309 network_create.go:108] docker network auto-810803 192.168.85.0/24 created
	I1002 08:06:21.386475  514309 kic.go:121] calculated static IP "192.168.85.2" for the "auto-810803" container
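The four "skipping subnet" lines above show the selection rule: start at 192.168.49.0/24, step the third octet by 9, and take the first /24 that no existing bridge already owns (here 192.168.85.0/24). A minimal Go sketch of that probing loop, assuming a hypothetical isTaken helper in place of minikube's real Docker network lookup:

package main

import (
	"fmt"
	"net"
)

// isTaken stands in for checking whether an existing Docker bridge already
// uses the candidate subnet (hypothetical helper, not minikube's API).
func isTaken(cidr string, taken map[string]bool) bool { return taken[cidr] }

// firstFreeSubnet probes 192.168.x.0/24 candidates starting at 192.168.49.0/24,
// stepping the third octet by 9, and returns the first unused subnet.
func firstFreeSubnet(taken map[string]bool) (string, error) {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if _, _, err := net.ParseCIDR(cidr); err != nil {
			return "", err
		}
		if !isTaken(cidr, taken) {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free private subnet found")
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	subnet, _ := firstFreeSubnet(taken)
	fmt.Println(subnet) // 192.168.85.0/24, matching the log above
}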
	I1002 08:06:21.386551  514309 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 08:06:21.417854  514309 cli_runner.go:164] Run: docker volume create auto-810803 --label name.minikube.sigs.k8s.io=auto-810803 --label created_by.minikube.sigs.k8s.io=true
	I1002 08:06:21.448673  514309 oci.go:103] Successfully created a docker volume auto-810803
	I1002 08:06:21.448756  514309 cli_runner.go:164] Run: docker run --rm --name auto-810803-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-810803 --entrypoint /usr/bin/test -v auto-810803:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 08:06:22.193525  514309 oci.go:107] Successfully prepared a docker volume auto-810803
	I1002 08:06:22.193581  514309 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:06:22.193601  514309 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 08:06:22.193667  514309 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-810803:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 08:06:24.237947  511270 node_ready.go:49] node "default-k8s-diff-port-417078" is "Ready"
	I1002 08:06:24.237974  511270 node_ready.go:38] duration metric: took 5.594615436s for node "default-k8s-diff-port-417078" to be "Ready" ...
	I1002 08:06:24.237991  511270 api_server.go:52] waiting for apiserver process to appear ...
	I1002 08:06:24.238068  511270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 08:06:26.708164  511270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.09536047s)
	I1002 08:06:26.708249  511270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.031945787s)
	I1002 08:06:27.142224  511270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.017491902s)
	I1002 08:06:27.142404  511270 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.904324053s)
	I1002 08:06:27.142425  511270 api_server.go:72] duration metric: took 9.157351893s to wait for apiserver process to appear ...
	I1002 08:06:27.142432  511270 api_server.go:88] waiting for apiserver healthz status ...
	I1002 08:06:27.142451  511270 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1002 08:06:27.145275  511270 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-417078 addons enable metrics-server
	
	I1002 08:06:27.148220  511270 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1002 08:06:27.151244  511270 addons.go:514] duration metric: took 9.165700869s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1002 08:06:27.160318  511270 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 08:06:27.160350  511270 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 08:06:27.642560  511270 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1002 08:06:27.671499  511270 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1002 08:06:27.676932  511270 api_server.go:141] control plane version: v1.34.1
	I1002 08:06:27.676992  511270 api_server.go:131] duration metric: took 534.537801ms to wait for apiserver health ...
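The 500 above is expected while the apiserver's post-start hooks finish (here poststarthook/rbac/bootstrap-roles); the next probe roughly half a second later returns 200. A minimal sketch of this poll-until-healthy pattern, assuming an insecure TLS client against the same /healthz URL (illustrative only, not minikube's actual api_server.go implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. TLS verification is skipped here because the test
// cluster uses a self-signed CA (assumption for this sketch).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond) // same ~500ms cadence seen in the log
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}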
	I1002 08:06:27.677003  511270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 08:06:27.704088  511270 system_pods.go:59] 8 kube-system pods found
	I1002 08:06:27.704123  511270 system_pods.go:61] "coredns-66bc5c9577-cscrn" [f16e8634-2bad-477e-8a6a-125d5982309c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:06:27.704139  511270 system_pods.go:61] "etcd-default-k8s-diff-port-417078" [42031abb-d4f1-402f-ab56-84febc04510b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:06:27.704147  511270 system_pods.go:61] "kindnet-xvmxj" [8150ddc1-f400-422d-a0a6-3a42c58bec39] Running
	I1002 08:06:27.704154  511270 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-417078" [a873c14b-9486-43dc-ae23-14e8295d0848] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:06:27.704162  511270 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-417078" [da19df7e-eaba-494d-8b1b-34d66627a3ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:06:27.704176  511270 system_pods.go:61] "kube-proxy-g6hc4" [63b17498-7dca-45ba-81a8-4aa33302a8df] Running
	I1002 08:06:27.704184  511270 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-417078" [ddfd8f2d-83ca-4e3c-98b3-c3a4ea103ee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:06:27.704198  511270 system_pods.go:61] "storage-provisioner" [12bac59c-b28d-4401-8b03-fb5742196ee4] Running
	I1002 08:06:27.704209  511270 system_pods.go:74] duration metric: took 27.199291ms to wait for pod list to return data ...
	I1002 08:06:27.704218  511270 default_sa.go:34] waiting for default service account to be created ...
	I1002 08:06:27.714516  511270 default_sa.go:45] found service account: "default"
	I1002 08:06:27.714555  511270 default_sa.go:55] duration metric: took 10.313408ms for default service account to be created ...
	I1002 08:06:27.714573  511270 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 08:06:27.723140  511270 system_pods.go:86] 8 kube-system pods found
	I1002 08:06:27.723181  511270 system_pods.go:89] "coredns-66bc5c9577-cscrn" [f16e8634-2bad-477e-8a6a-125d5982309c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 08:06:27.723193  511270 system_pods.go:89] "etcd-default-k8s-diff-port-417078" [42031abb-d4f1-402f-ab56-84febc04510b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 08:06:27.723199  511270 system_pods.go:89] "kindnet-xvmxj" [8150ddc1-f400-422d-a0a6-3a42c58bec39] Running
	I1002 08:06:27.723206  511270 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-417078" [a873c14b-9486-43dc-ae23-14e8295d0848] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 08:06:27.723225  511270 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-417078" [da19df7e-eaba-494d-8b1b-34d66627a3ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 08:06:27.723235  511270 system_pods.go:89] "kube-proxy-g6hc4" [63b17498-7dca-45ba-81a8-4aa33302a8df] Running
	I1002 08:06:27.723242  511270 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-417078" [ddfd8f2d-83ca-4e3c-98b3-c3a4ea103ee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 08:06:27.723328  511270 system_pods.go:89] "storage-provisioner" [12bac59c-b28d-4401-8b03-fb5742196ee4] Running
	I1002 08:06:27.723378  511270 system_pods.go:126] duration metric: took 8.798145ms to wait for k8s-apps to be running ...
	I1002 08:06:27.723389  511270 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 08:06:27.723443  511270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 08:06:27.795508  511270 system_svc.go:56] duration metric: took 72.106997ms WaitForService to wait for kubelet
	I1002 08:06:27.795612  511270 kubeadm.go:586] duration metric: took 9.810535295s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 08:06:27.795641  511270 node_conditions.go:102] verifying NodePressure condition ...
	I1002 08:06:27.803936  511270 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 08:06:27.803968  511270 node_conditions.go:123] node cpu capacity is 2
	I1002 08:06:27.803982  511270 node_conditions.go:105] duration metric: took 8.333583ms to run NodePressure ...
	I1002 08:06:27.803995  511270 start.go:241] waiting for startup goroutines ...
	I1002 08:06:27.804003  511270 start.go:246] waiting for cluster config update ...
	I1002 08:06:27.804013  511270 start.go:255] writing updated cluster config ...
	I1002 08:06:27.804283  511270 ssh_runner.go:195] Run: rm -f paused
	I1002 08:06:27.815749  511270 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:06:27.831532  511270 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cscrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:06:26.911147  514309 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-810803:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.717439615s)
	I1002 08:06:26.911180  514309 kic.go:203] duration metric: took 4.717575739s to extract preloaded images to volume ...
	W1002 08:06:26.911328  514309 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 08:06:26.911474  514309 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 08:06:27.021628  514309 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-810803 --name auto-810803 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-810803 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-810803 --network auto-810803 --ip 192.168.85.2 --volume auto-810803:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 08:06:27.395057  514309 cli_runner.go:164] Run: docker container inspect auto-810803 --format={{.State.Running}}
	I1002 08:06:27.427966  514309 cli_runner.go:164] Run: docker container inspect auto-810803 --format={{.State.Status}}
	I1002 08:06:27.465068  514309 cli_runner.go:164] Run: docker exec auto-810803 stat /var/lib/dpkg/alternatives/iptables
	I1002 08:06:27.521709  514309 oci.go:144] the created container "auto-810803" has a running status.
	I1002 08:06:27.521748  514309 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa...
	I1002 08:06:28.811595  514309 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 08:06:28.831268  514309 cli_runner.go:164] Run: docker container inspect auto-810803 --format={{.State.Status}}
	I1002 08:06:28.848908  514309 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 08:06:28.848932  514309 kic_runner.go:114] Args: [docker exec --privileged auto-810803 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 08:06:28.903331  514309 cli_runner.go:164] Run: docker container inspect auto-810803 --format={{.State.Status}}
	I1002 08:06:28.927749  514309 machine.go:93] provisionDockerMachine start ...
	I1002 08:06:28.927858  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:28.962598  514309 main.go:141] libmachine: Using SSH client type: native
	I1002 08:06:28.962955  514309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1002 08:06:28.962972  514309 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 08:06:28.963533  514309 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47398->127.0.0.1:33448: read: connection reset by peer
	W1002 08:06:29.837135  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	W1002 08:06:31.839937  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	I1002 08:06:32.115586  514309 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-810803
	
	I1002 08:06:32.115650  514309 ubuntu.go:182] provisioning hostname "auto-810803"
	I1002 08:06:32.115747  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:32.139295  514309 main.go:141] libmachine: Using SSH client type: native
	I1002 08:06:32.139595  514309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1002 08:06:32.139612  514309 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-810803 && echo "auto-810803" | sudo tee /etc/hostname
	I1002 08:06:32.312049  514309 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-810803
	
	I1002 08:06:32.312141  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:32.336439  514309 main.go:141] libmachine: Using SSH client type: native
	I1002 08:06:32.336753  514309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1002 08:06:32.336778  514309 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-810803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-810803/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-810803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 08:06:32.475490  514309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 08:06:32.475523  514309 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-292504/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-292504/.minikube}
	I1002 08:06:32.475546  514309 ubuntu.go:190] setting up certificates
	I1002 08:06:32.475558  514309 provision.go:84] configureAuth start
	I1002 08:06:32.475626  514309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-810803
	I1002 08:06:32.504378  514309 provision.go:143] copyHostCerts
	I1002 08:06:32.504447  514309 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem, removing ...
	I1002 08:06:32.504456  514309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem
	I1002 08:06:32.504550  514309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/ca.pem (1082 bytes)
	I1002 08:06:32.504639  514309 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem, removing ...
	I1002 08:06:32.504645  514309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem
	I1002 08:06:32.504670  514309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/cert.pem (1123 bytes)
	I1002 08:06:32.504740  514309 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem, removing ...
	I1002 08:06:32.504745  514309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem
	I1002 08:06:32.504768  514309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-292504/.minikube/key.pem (1675 bytes)
	I1002 08:06:32.504829  514309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem org=jenkins.auto-810803 san=[127.0.0.1 192.168.85.2 auto-810803 localhost minikube]
	I1002 08:06:33.233209  514309 provision.go:177] copyRemoteCerts
	I1002 08:06:33.233289  514309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 08:06:33.233341  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:33.268751  514309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa Username:docker}
	I1002 08:06:33.373424  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 08:06:33.398568  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1002 08:06:33.426812  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 08:06:33.453113  514309 provision.go:87] duration metric: took 977.531018ms to configureAuth
	I1002 08:06:33.453142  514309 ubuntu.go:206] setting minikube options for container-runtime
	I1002 08:06:33.453330  514309 config.go:182] Loaded profile config "auto-810803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:06:33.453442  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:33.472731  514309 main.go:141] libmachine: Using SSH client type: native
	I1002 08:06:33.473045  514309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1002 08:06:33.473067  514309 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 08:06:33.793019  514309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 08:06:33.793047  514309 machine.go:96] duration metric: took 4.865276876s to provisionDockerMachine
	I1002 08:06:33.793058  514309 client.go:171] duration metric: took 12.599813345s to LocalClient.Create
	I1002 08:06:33.793072  514309 start.go:167] duration metric: took 12.5998947s to libmachine.API.Create "auto-810803"
	I1002 08:06:33.793107  514309 start.go:293] postStartSetup for "auto-810803" (driver="docker")
	I1002 08:06:33.793125  514309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 08:06:33.793195  514309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 08:06:33.793263  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:33.828630  514309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa Username:docker}
	I1002 08:06:33.927845  514309 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 08:06:33.931939  514309 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 08:06:33.931970  514309 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 08:06:33.931981  514309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/addons for local assets ...
	I1002 08:06:33.932040  514309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-292504/.minikube/files for local assets ...
	I1002 08:06:33.932131  514309 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem -> 2943572.pem in /etc/ssl/certs
	I1002 08:06:33.932241  514309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 08:06:33.942468  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:06:33.969719  514309 start.go:296] duration metric: took 176.590466ms for postStartSetup
	I1002 08:06:33.970123  514309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-810803
	I1002 08:06:33.992779  514309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/config.json ...
	I1002 08:06:33.993071  514309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 08:06:33.993119  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:34.016736  514309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa Username:docker}
	I1002 08:06:34.120605  514309 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 08:06:34.127915  514309 start.go:128] duration metric: took 12.938500079s to createHost
	I1002 08:06:34.127938  514309 start.go:83] releasing machines lock for "auto-810803", held for 12.938621376s
	I1002 08:06:34.128013  514309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-810803
	I1002 08:06:34.149613  514309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 08:06:34.150564  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:34.150648  514309 ssh_runner.go:195] Run: cat /version.json
	I1002 08:06:34.152420  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:06:34.187527  514309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa Username:docker}
	I1002 08:06:34.187611  514309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa Username:docker}
	I1002 08:06:34.399717  514309 ssh_runner.go:195] Run: systemctl --version
	I1002 08:06:34.407801  514309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 08:06:34.462776  514309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 08:06:34.467674  514309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 08:06:34.467796  514309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 08:06:34.503810  514309 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 08:06:34.503881  514309 start.go:495] detecting cgroup driver to use...
	I1002 08:06:34.503944  514309 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 08:06:34.504051  514309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 08:06:34.528746  514309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 08:06:34.543789  514309 docker.go:218] disabling cri-docker service (if available) ...
	I1002 08:06:34.543926  514309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 08:06:34.562346  514309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 08:06:34.583364  514309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 08:06:34.732913  514309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 08:06:34.896247  514309 docker.go:234] disabling docker service ...
	I1002 08:06:34.896357  514309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 08:06:34.922730  514309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 08:06:34.940921  514309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 08:06:35.109624  514309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 08:06:35.264485  514309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 08:06:35.280201  514309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 08:06:35.295582  514309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 08:06:35.295647  514309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:06:35.305316  514309 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 08:06:35.305387  514309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:06:35.314987  514309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:06:35.324058  514309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:06:35.333817  514309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 08:06:35.342653  514309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:06:35.352215  514309 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:06:35.366389  514309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 08:06:35.375763  514309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 08:06:35.384588  514309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 08:06:35.393379  514309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:06:35.539848  514309 ssh_runner.go:195] Run: sudo systemctl restart crio
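Taken together, the crictl.yaml write and the four sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) leave the CRI-O drop-in roughly in the shape sketched below. This is a reconstruction from the commands in the log, not a dump of the real file; the section headers and any other keys in 02-crio.conf are assumptions.

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (only the keys touched above; headers assumed)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]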
	I1002 08:06:35.923550  514309 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 08:06:35.923673  514309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 08:06:35.930484  514309 start.go:563] Will wait 60s for crictl version
	I1002 08:06:35.930651  514309 ssh_runner.go:195] Run: which crictl
	I1002 08:06:35.937848  514309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 08:06:35.978622  514309 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 08:06:35.978704  514309 ssh_runner.go:195] Run: crio --version
	I1002 08:06:36.012943  514309 ssh_runner.go:195] Run: crio --version
	I1002 08:06:36.057011  514309 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1002 08:06:34.341272  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	W1002 08:06:36.343895  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	W1002 08:06:38.345834  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	I1002 08:06:36.059964  514309 cli_runner.go:164] Run: docker network inspect auto-810803 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 08:06:36.081451  514309 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 08:06:36.085527  514309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:06:36.100905  514309 kubeadm.go:883] updating cluster {Name:auto-810803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 08:06:36.101007  514309 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 08:06:36.101070  514309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:06:36.145388  514309 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:06:36.145407  514309 crio.go:433] Images already preloaded, skipping extraction
	I1002 08:06:36.145460  514309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 08:06:36.173505  514309 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 08:06:36.173580  514309 cache_images.go:85] Images are preloaded, skipping loading
	I1002 08:06:36.173602  514309 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 08:06:36.173719  514309 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-810803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-810803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 08:06:36.173838  514309 ssh_runner.go:195] Run: crio config
	I1002 08:06:36.242887  514309 cni.go:84] Creating CNI manager for ""
	I1002 08:06:36.242952  514309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:06:36.242981  514309 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 08:06:36.243036  514309 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-810803 NodeName:auto-810803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 08:06:36.243213  514309 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-810803"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
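The YAML above is what gets written to /var/tmp/minikube/kubeadm.yaml a few lines further down and handed to kubeadm at 08:06:40. As a hedged aside (not something this run executes), the same file could be exercised by hand inside the node without mutating cluster state, along these lines:

    # Sketch only: dry-run the generated config; the paths are the ones shown in the log.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run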
	
	I1002 08:06:36.243321  514309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 08:06:36.252219  514309 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 08:06:36.252332  514309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 08:06:36.261559  514309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1002 08:06:36.276467  514309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 08:06:36.291197  514309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1002 08:06:36.307256  514309 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 08:06:36.311187  514309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 08:06:36.321616  514309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:06:36.480274  514309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:06:36.496384  514309 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803 for IP: 192.168.85.2
	I1002 08:06:36.496455  514309 certs.go:195] generating shared ca certs ...
	I1002 08:06:36.496486  514309 certs.go:227] acquiring lock for ca certs: {Name:mk1001d0c4f64a60703dbacc19b9aaad0c1438c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:36.496663  514309 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key
	I1002 08:06:36.496737  514309 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key
	I1002 08:06:36.496777  514309 certs.go:257] generating profile certs ...
	I1002 08:06:36.496858  514309 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.key
	I1002 08:06:36.496895  514309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt with IP's: []
	I1002 08:06:37.232198  514309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt ...
	I1002 08:06:37.232230  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: {Name:mkea2e55c1e1ae8aecf9c1c8462a12f6c15e1737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:37.232427  514309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.key ...
	I1002 08:06:37.232445  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.key: {Name:mkfb4c7d19a8a0ace68a5273fd7f48046a8d5252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:37.232552  514309 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.key.64edb8c6
	I1002 08:06:37.232573  514309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.crt.64edb8c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 08:06:37.798694  514309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.crt.64edb8c6 ...
	I1002 08:06:37.798727  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.crt.64edb8c6: {Name:mkf538a64679a31792ccc2e75ed53d24bfa09749 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:37.798990  514309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.key.64edb8c6 ...
	I1002 08:06:37.799013  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.key.64edb8c6: {Name:mk12f0b15d1b4207ceb18248fc51f37b122ea6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:37.799136  514309 certs.go:382] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.crt.64edb8c6 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.crt
	I1002 08:06:37.799229  514309 certs.go:386] copying /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.key.64edb8c6 -> /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.key
	I1002 08:06:37.799294  514309 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.key
	I1002 08:06:37.799314  514309 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.crt with IP's: []
	I1002 08:06:39.538739  514309 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.crt ...
	I1002 08:06:39.538774  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.crt: {Name:mkdc94d432f1549d2e610bf2c7f17aabd64b281c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:39.538945  514309 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.key ...
	I1002 08:06:39.538962  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.key: {Name:mk336416a341911b7f8763dd5dcda16c70e1a472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:06:39.539157  514309 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem (1338 bytes)
	W1002 08:06:39.539200  514309 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357_empty.pem, impossibly tiny 0 bytes
	I1002 08:06:39.539214  514309 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 08:06:39.539238  514309 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/ca.pem (1082 bytes)
	I1002 08:06:39.539264  514309 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/cert.pem (1123 bytes)
	I1002 08:06:39.539301  514309 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/certs/key.pem (1675 bytes)
	I1002 08:06:39.539349  514309 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem (1708 bytes)
	I1002 08:06:39.540004  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 08:06:39.573029  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 08:06:39.600417  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 08:06:39.644639  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 08:06:39.679796  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 08:06:39.709222  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 08:06:39.727254  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 08:06:39.745552  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 08:06:39.763922  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/ssl/certs/2943572.pem --> /usr/share/ca-certificates/2943572.pem (1708 bytes)
	I1002 08:06:39.782532  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 08:06:39.801147  514309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-292504/.minikube/certs/294357.pem --> /usr/share/ca-certificates/294357.pem (1338 bytes)
	I1002 08:06:39.819691  514309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 08:06:39.832834  514309 ssh_runner.go:195] Run: openssl version
	I1002 08:06:39.839754  514309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2943572.pem && ln -fs /usr/share/ca-certificates/2943572.pem /etc/ssl/certs/2943572.pem"
	I1002 08:06:39.848684  514309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2943572.pem
	I1002 08:06:39.853027  514309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 06:48 /usr/share/ca-certificates/2943572.pem
	I1002 08:06:39.853096  514309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2943572.pem
	I1002 08:06:39.909092  514309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2943572.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 08:06:39.918557  514309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 08:06:39.930252  514309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:06:39.943389  514309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:42 /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:06:39.943473  514309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 08:06:40.038797  514309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 08:06:40.049472  514309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294357.pem && ln -fs /usr/share/ca-certificates/294357.pem /etc/ssl/certs/294357.pem"
	I1002 08:06:40.065496  514309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294357.pem
	I1002 08:06:40.075925  514309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 06:48 /usr/share/ca-certificates/294357.pem
	I1002 08:06:40.075999  514309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294357.pem
	I1002 08:06:40.122720  514309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294357.pem /etc/ssl/certs/51391683.0"
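Each of the three CA installs above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and publish it as a hash-named symlink under /etc/ssl/certs. A condensed sketch of that pattern, using the file and hash this run produced for minikubeCA.pem:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in this run
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"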
	I1002 08:06:40.132352  514309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 08:06:40.137484  514309 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 08:06:40.137538  514309 kubeadm.go:400] StartCluster: {Name:auto-810803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-810803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 08:06:40.137621  514309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 08:06:40.137685  514309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 08:06:40.170605  514309 cri.go:89] found id: ""
	I1002 08:06:40.170690  514309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 08:06:40.181705  514309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 08:06:40.190749  514309 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 08:06:40.190821  514309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 08:06:40.202460  514309 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 08:06:40.202483  514309 kubeadm.go:157] found existing configuration files:
	
	I1002 08:06:40.202547  514309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 08:06:40.212487  514309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 08:06:40.212571  514309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 08:06:40.220806  514309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 08:06:40.230001  514309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 08:06:40.230089  514309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 08:06:40.238517  514309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 08:06:40.247823  514309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 08:06:40.247897  514309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 08:06:40.255813  514309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 08:06:40.264744  514309 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 08:06:40.264814  514309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
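The four grep/rm pairs above are the stale-config check: each kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed (here none of the files exist yet, so every grep fails and the rm is a no-op). A rough shell equivalent of that loop, for orientation only:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done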
	I1002 08:06:40.272700  514309 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 08:06:40.324447  514309 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 08:06:40.324833  514309 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 08:06:40.355978  514309 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 08:06:40.356058  514309 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 08:06:40.356116  514309 kubeadm.go:318] OS: Linux
	I1002 08:06:40.356169  514309 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 08:06:40.356226  514309 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 08:06:40.356282  514309 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 08:06:40.356340  514309 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 08:06:40.356395  514309 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 08:06:40.356449  514309 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 08:06:40.356501  514309 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 08:06:40.356555  514309 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 08:06:40.356607  514309 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 08:06:40.464156  514309 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 08:06:40.464298  514309 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 08:06:40.464402  514309 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 08:06:40.479464  514309 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 08:06:40.484763  514309 out.go:252]   - Generating certificates and keys ...
	I1002 08:06:40.484884  514309 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 08:06:40.484978  514309 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	W1002 08:06:40.838434  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	W1002 08:06:42.838909  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	I1002 08:06:41.347611  514309 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 08:06:41.614514  514309 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 08:06:42.484925  514309 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 08:06:43.045653  514309 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 08:06:43.135982  514309 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 08:06:43.136545  514309 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-810803 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 08:06:43.665663  514309 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 08:06:43.666285  514309 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-810803 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 08:06:44.417444  514309 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 08:06:45.234236  514309 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	W1002 08:06:44.839706  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	W1002 08:06:46.841176  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	I1002 08:06:45.987163  514309 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 08:06:45.987513  514309 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 08:06:46.923675  514309 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 08:06:47.149077  514309 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 08:06:47.977991  514309 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 08:06:49.371990  514309 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 08:06:49.464883  514309 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 08:06:49.465573  514309 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 08:06:49.468364  514309 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 08:06:49.472105  514309 out.go:252]   - Booting up control plane ...
	I1002 08:06:49.472223  514309 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 08:06:49.472311  514309 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 08:06:49.472385  514309 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 08:06:49.487745  514309 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 08:06:49.487867  514309 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 08:06:49.495395  514309 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 08:06:49.496379  514309 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 08:06:49.496433  514309 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 08:06:49.644417  514309 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 08:06:49.644822  514309 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 08:06:50.650003  514309 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001199448s
	I1002 08:06:50.650145  514309 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 08:06:50.650247  514309 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 08:06:50.650346  514309 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 08:06:50.650439  514309 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1002 08:06:48.843970  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	W1002 08:06:51.337766  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	I1002 08:06:57.242103  514309 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.592236058s
	I1002 08:06:57.443496  514309 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.793812651s
	I1002 08:06:57.651183  514309 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.001377976s
	I1002 08:06:57.670927  514309 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 08:06:57.685926  514309 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 08:06:57.700853  514309 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 08:06:57.701090  514309 kubeadm.go:318] [mark-control-plane] Marking the node auto-810803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 08:06:57.713614  514309 kubeadm.go:318] [bootstrap-token] Using token: rsphtv.zahzgr4n38b0kscw
	W1002 08:06:53.839366  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	W1002 08:06:56.341018  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	I1002 08:06:57.716520  514309 out.go:252]   - Configuring RBAC rules ...
	I1002 08:06:57.716661  514309 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 08:06:57.723628  514309 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 08:06:57.733981  514309 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 08:06:57.738196  514309 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 08:06:57.742649  514309 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 08:06:57.747276  514309 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 08:06:58.059114  514309 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 08:06:58.507231  514309 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 08:06:59.057893  514309 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 08:06:59.059191  514309 kubeadm.go:318] 
	I1002 08:06:59.059308  514309 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 08:06:59.059322  514309 kubeadm.go:318] 
	I1002 08:06:59.059405  514309 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 08:06:59.059410  514309 kubeadm.go:318] 
	I1002 08:06:59.059456  514309 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 08:06:59.059557  514309 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 08:06:59.059621  514309 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 08:06:59.059635  514309 kubeadm.go:318] 
	I1002 08:06:59.059691  514309 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 08:06:59.059707  514309 kubeadm.go:318] 
	I1002 08:06:59.059758  514309 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 08:06:59.059767  514309 kubeadm.go:318] 
	I1002 08:06:59.059821  514309 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 08:06:59.059905  514309 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 08:06:59.059983  514309 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 08:06:59.059993  514309 kubeadm.go:318] 
	I1002 08:06:59.060081  514309 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 08:06:59.060174  514309 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 08:06:59.060183  514309 kubeadm.go:318] 
	I1002 08:06:59.060270  514309 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token rsphtv.zahzgr4n38b0kscw \
	I1002 08:06:59.060380  514309 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf \
	I1002 08:06:59.060405  514309 kubeadm.go:318] 	--control-plane 
	I1002 08:06:59.060413  514309 kubeadm.go:318] 
	I1002 08:06:59.060502  514309 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 08:06:59.060510  514309 kubeadm.go:318] 
	I1002 08:06:59.060595  514309 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token rsphtv.zahzgr4n38b0kscw \
	I1002 08:06:59.060704  514309 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d03eccb52768cdf469980276c5a02cb215379f8ec4b6320d505d5d581cd4aeaf 
	I1002 08:06:59.066079  514309 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 08:06:59.066346  514309 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 08:06:59.066465  514309 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 08:06:59.066492  514309 cni.go:84] Creating CNI manager for ""
	I1002 08:06:59.066502  514309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 08:06:59.071688  514309 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 08:06:59.074715  514309 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 08:06:59.078943  514309 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 08:06:59.078963  514309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 08:06:59.092899  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 08:06:59.400439  514309 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 08:06:59.400545  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:06:59.400575  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-810803 minikube.k8s.io/updated_at=2025_10_02T08_06_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=auto-810803 minikube.k8s.io/primary=true
	I1002 08:06:59.427904  514309 ops.go:34] apiserver oom_adj: -16
	I1002 08:06:59.558225  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:00.058997  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:00.559287  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1002 08:06:58.840131  511270 pod_ready.go:104] pod "coredns-66bc5c9577-cscrn" is not "Ready", error: <nil>
	I1002 08:07:00.371154  511270 pod_ready.go:94] pod "coredns-66bc5c9577-cscrn" is "Ready"
	I1002 08:07:00.371248  511270 pod_ready.go:86] duration metric: took 32.539690981s for pod "coredns-66bc5c9577-cscrn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:00.383728  511270 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:00.409064  511270 pod_ready.go:94] pod "etcd-default-k8s-diff-port-417078" is "Ready"
	I1002 08:07:00.409148  511270 pod_ready.go:86] duration metric: took 25.389511ms for pod "etcd-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:00.482501  511270 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:00.488418  511270 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-417078" is "Ready"
	I1002 08:07:00.488449  511270 pod_ready.go:86] duration metric: took 5.915336ms for pod "kube-apiserver-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:00.491476  511270 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:00.536605  511270 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-417078" is "Ready"
	I1002 08:07:00.536634  511270 pod_ready.go:86] duration metric: took 45.126727ms for pod "kube-controller-manager-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:00.736843  511270 pod_ready.go:83] waiting for pod "kube-proxy-g6hc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:01.136824  511270 pod_ready.go:94] pod "kube-proxy-g6hc4" is "Ready"
	I1002 08:07:01.136852  511270 pod_ready.go:86] duration metric: took 399.978002ms for pod "kube-proxy-g6hc4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:01.336100  511270 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:01.736772  511270 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-417078" is "Ready"
	I1002 08:07:01.736802  511270 pod_ready.go:86] duration metric: took 400.616966ms for pod "kube-scheduler-default-k8s-diff-port-417078" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 08:07:01.736816  511270 pod_ready.go:40] duration metric: took 33.921023761s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 08:07:01.793866  511270 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 08:07:01.798913  511270 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-417078" cluster and "default" namespace by default
	I1002 08:07:01.058621  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:01.559294  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:02.058568  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:02.559230  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:03.058610  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:03.558828  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:04.059222  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:04.559160  514309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 08:07:04.670461  514309 kubeadm.go:1113] duration metric: took 5.269978737s to wait for elevateKubeSystemPrivileges
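The burst of identical "kubectl get sa default" commands between 08:06:59 and 08:07:04 is a poll: minikube retries roughly every 500ms until the default service account exists, which is what the 5.27s elevateKubeSystemPrivileges metric above measures. A rough shell equivalent of that wait, for orientation only:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done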
	I1002 08:07:04.670495  514309 kubeadm.go:402] duration metric: took 24.532960537s to StartCluster
	I1002 08:07:04.670513  514309 settings.go:142] acquiring lock: {Name:mk77a6bf89241f3180d614c1507d4086429d94cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:07:04.670589  514309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 08:07:04.672625  514309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/kubeconfig: {Name:mk75d2449ff3bd948b637625e2aafd898a41d5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 08:07:04.673213  514309 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 08:07:04.675849  514309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 08:07:04.676188  514309 config.go:182] Loaded profile config "auto-810803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 08:07:04.676195  514309 out.go:179] * Verifying Kubernetes components...
	I1002 08:07:04.676599  514309 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 08:07:04.676696  514309 addons.go:69] Setting storage-provisioner=true in profile "auto-810803"
	I1002 08:07:04.676709  514309 addons.go:238] Setting addon storage-provisioner=true in "auto-810803"
	I1002 08:07:04.676737  514309 host.go:66] Checking if "auto-810803" exists ...
	I1002 08:07:04.677039  514309 addons.go:69] Setting default-storageclass=true in profile "auto-810803"
	I1002 08:07:04.677060  514309 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-810803"
	I1002 08:07:04.677233  514309 cli_runner.go:164] Run: docker container inspect auto-810803 --format={{.State.Status}}
	I1002 08:07:04.677383  514309 cli_runner.go:164] Run: docker container inspect auto-810803 --format={{.State.Status}}
	I1002 08:07:04.680308  514309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 08:07:04.716131  514309 addons.go:238] Setting addon default-storageclass=true in "auto-810803"
	I1002 08:07:04.716172  514309 host.go:66] Checking if "auto-810803" exists ...
	I1002 08:07:04.716591  514309 cli_runner.go:164] Run: docker container inspect auto-810803 --format={{.State.Status}}
	I1002 08:07:04.725841  514309 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 08:07:04.728760  514309 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:07:04.728783  514309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 08:07:04.728851  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:07:04.766924  514309 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 08:07:04.766948  514309 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 08:07:04.767019  514309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-810803
	I1002 08:07:04.784695  514309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa Username:docker}
	I1002 08:07:04.814160  514309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/auto-810803/id_rsa Username:docker}
	I1002 08:07:04.946130  514309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 08:07:04.961674  514309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 08:07:05.019481  514309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 08:07:05.077307  514309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 08:07:05.497458  514309 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
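The replace at 08:07:04.946 rewrites the coredns ConfigMap in place; judging from the sed expression in that command, the Corefile gains a log directive immediately above errors and a hosts block immediately above the forward plugin. Only the inserted fragments are sketched here; the rest of the Corefile is omitted:

        log
        errors
        ...
        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf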
	I1002 08:07:05.499587  514309 node_ready.go:35] waiting up to 15m0s for node "auto-810803" to be "Ready" ...
	I1002 08:07:05.821907  514309 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 08:07:05.824759  514309 addons.go:514] duration metric: took 1.148148101s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 08:07:06.011097  514309 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-810803" context rescaled to 1 replicas
	W1002 08:07:07.502627  514309 node_ready.go:57] node "auto-810803" has "Ready":"False" status (will retry)
	W1002 08:07:09.502938  514309 node_ready.go:57] node "auto-810803" has "Ready":"False" status (will retry)
	W1002 08:07:11.506880  514309 node_ready.go:57] node "auto-810803" has "Ready":"False" status (will retry)
	W1002 08:07:14.008259  514309 node_ready.go:57] node "auto-810803" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.00347145Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.012108141Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.012149373Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.012173578Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.016029441Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.016068424Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.016094984Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.020087612Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.020124101Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.020150201Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.023936641Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.023973621Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.760247897Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=87379860-4028-425a-adeb-5bc5e14e6628 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.761525644Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9a8e4ee1-9d28-4cda-874f-9cd9bc2de7a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.766425127Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t/dashboard-metrics-scraper" id=6ff1ef11-b3fd-43ba-ad64-64c461ca061b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.766705302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.776312439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.776841814Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.795236003Z" level=info msg="Created container 89a092255a6551e8d029774a61b80e3deae1f18d316632be4c9595a6fce3e283: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t/dashboard-metrics-scraper" id=6ff1ef11-b3fd-43ba-ad64-64c461ca061b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.796873278Z" level=info msg="Starting container: 89a092255a6551e8d029774a61b80e3deae1f18d316632be4c9595a6fce3e283" id=fd4cf0df-6150-49f9-a090-2c2800db2b47 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 08:07:06 default-k8s-diff-port-417078 conmon[1692]: conmon 89a092255a6551e8d029 <ninfo>: container 1694 exited with status 1
	Oct 02 08:07:06 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:06.810225073Z" level=info msg="Started container" PID=1694 containerID=89a092255a6551e8d029774a61b80e3deae1f18d316632be4c9595a6fce3e283 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t/dashboard-metrics-scraper id=fd4cf0df-6150-49f9-a090-2c2800db2b47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d8229060c645ab689f9fc104345fe6238ca4372a8e9894308d7b7018d8b4b063
	Oct 02 08:07:07 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:07.076086643Z" level=info msg="Removing container: 2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff" id=9c7822f4-3c0a-495a-a6a0-12d8f2823ca3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:07:07 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:07.086889303Z" level=info msg="Error loading conmon cgroup of container 2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff: cgroup deleted" id=9c7822f4-3c0a-495a-a6a0-12d8f2823ca3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 08:07:07 default-k8s-diff-port-417078 crio[650]: time="2025-10-02T08:07:07.092026442Z" level=info msg="Removed container 2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t/dashboard-metrics-scraper" id=9c7822f4-3c0a-495a-a6a0-12d8f2823ca3 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	89a092255a655       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago       Exited              dashboard-metrics-scraper   2                   d8229060c645a       dashboard-metrics-scraper-6ffb444bf9-wrn9t             kubernetes-dashboard
	60656c47cfe1b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago       Running             storage-provisioner         2                   23095c8361628       storage-provisioner                                    kube-system
	3424e6b891d1d       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   d1f748f8546e8       kubernetes-dashboard-855c9754f9-zm2mb                  kubernetes-dashboard
	7857d6a2c27ee       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   1b7eace8b394b       coredns-66bc5c9577-cscrn                               kube-system
	1c2b537ef32d1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   03ecb93eb54f2       kube-proxy-g6hc4                                       kube-system
	3e3390ef7a71e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   23095c8361628       storage-provisioner                                    kube-system
	5e20db31d5509       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   06c396ee8301f       kindnet-xvmxj                                          kube-system
	8df20169712b9       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   6fdfe172c56df       busybox                                                default
	58e9ec4d18140       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   a395b5248ad0c       etcd-default-k8s-diff-port-417078                      kube-system
	3fe23dab4fa0b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   f984eb2c0b35a       kube-controller-manager-default-k8s-diff-port-417078   kube-system
	51204ad2326f2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   edddaf4b2aa6e       kube-scheduler-default-k8s-diff-port-417078            kube-system
	3a01e925d0339       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   51f94073835c2       kube-apiserver-default-k8s-diff-port-417078            kube-system
	
	
	==> coredns [7857d6a2c27eedcc3e1e3425fc86feebd1ed00455b0b25e76849e78058d175a8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43124 - 46038 "HINFO IN 5148158578930309505.2170458187798132499. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018081981s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-417078
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-417078
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=default-k8s-diff-port-417078
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T08_04_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 08:04:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-417078
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 08:07:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 08:06:54 +0000   Thu, 02 Oct 2025 08:04:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 08:06:54 +0000   Thu, 02 Oct 2025 08:04:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 08:06:54 +0000   Thu, 02 Oct 2025 08:04:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 08:06:54 +0000   Thu, 02 Oct 2025 08:05:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-417078
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2a4ec1c6a264eebaea9df903962cb0c
	  System UUID:                f4fac9d3-943a-43ee-b70b-67637923d71e
	  Boot ID:                    7d0f8d16-987d-4df1-90e3-15584f970729
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-cscrn                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-default-k8s-diff-port-417078                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-xvmxj                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-417078             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-417078    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-g6hc4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-default-k8s-diff-port-417078             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wrn9t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zm2mb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m19s                  node-controller  Node default-k8s-diff-port-417078 event: Registered Node default-k8s-diff-port-417078 in Controller
	  Normal   NodeReady                97s                    kubelet          Node default-k8s-diff-port-417078 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-417078 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node default-k8s-diff-port-417078 event: Registered Node default-k8s-diff-port-417078 in Controller
	
	
	==> dmesg <==
	[Oct 2 07:37] overlayfs: idmapped layers are currently not supported
	[ +15.983625] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:39] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:40] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:41] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:42] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:48] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:50] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:56] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 07:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:00] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:03] overlayfs: idmapped layers are currently not supported
	[ +38.953360] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:05] overlayfs: idmapped layers are currently not supported
	[Oct 2 08:06] overlayfs: idmapped layers are currently not supported
	[ +14.824071] overlayfs: idmapped layers are currently not supported
	[ +33.610286] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [58e9ec4d181400c19075bad03bd7c590fa61e2f6e890fe6423d6ab1e2a40928d] <==
	{"level":"warn","ts":"2025-10-02T08:06:22.209404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.253192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.265221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.289997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.316419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.354850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.383992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.421293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.448297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.461679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.489386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.513976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.579179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.580023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.608187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T08:06:22.775454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41952","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T08:06:26.447319Z","caller":"traceutil/trace.go:172","msg":"trace[1904468523] transaction","detail":"{read_only:false; response_revision:541; number_of_response:1; }","duration":"112.252057ms","start":"2025-10-02T08:06:26.335051Z","end":"2025-10-02T08:06:26.447303Z","steps":["trace[1904468523] 'process raft request'  (duration: 72.197107ms)","trace[1904468523] 'compare'  (duration: 39.793352ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T08:06:26.447583Z","caller":"traceutil/trace.go:172","msg":"trace[1959837794] transaction","detail":"{read_only:false; response_revision:542; number_of_response:1; }","duration":"112.427288ms","start":"2025-10-02T08:06:26.335146Z","end":"2025-10-02T08:06:26.447573Z","steps":["trace[1959837794] 'process raft request'  (duration: 112.015928ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T08:06:26.607999Z","caller":"traceutil/trace.go:172","msg":"trace[846315445] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"108.541811ms","start":"2025-10-02T08:06:26.499441Z","end":"2025-10-02T08:06:26.607982Z","steps":["trace[846315445] 'process raft request'  (duration: 62.242224ms)","trace[846315445] 'compare'  (duration: 46.051396ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T08:06:26.608196Z","caller":"traceutil/trace.go:172","msg":"trace[614259474] transaction","detail":"{read_only:false; response_revision:548; number_of_response:1; }","duration":"108.564721ms","start":"2025-10-02T08:06:26.499625Z","end":"2025-10-02T08:06:26.608190Z","steps":["trace[614259474] 'process raft request'  (duration: 108.191294ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T08:06:26.764790Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.351948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T08:06:26.764952Z","caller":"traceutil/trace.go:172","msg":"trace[1200747866] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:551; }","duration":"104.527596ms","start":"2025-10-02T08:06:26.660409Z","end":"2025-10-02T08:06:26.764937Z","steps":["trace[1200747866] 'agreement among raft nodes before linearized reading'  (duration: 66.617018ms)","trace[1200747866] 'range keys from in-memory index tree'  (duration: 37.716197ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T08:06:26.764892Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.634342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:ttl-after-finished-controller\" limit:1 ","response":"range_response_count:1 size:695"}
	{"level":"info","ts":"2025-10-02T08:06:26.765442Z","caller":"traceutil/trace.go:172","msg":"trace[834674283] range","detail":"{range_begin:/registry/clusterroles/system:controller:ttl-after-finished-controller; range_end:; response_count:1; response_revision:551; }","duration":"117.1883ms","start":"2025-10-02T08:06:26.648241Z","end":"2025-10-02T08:06:26.765429Z","steps":["trace[834674283] 'agreement among raft nodes before linearized reading'  (duration: 78.791776ms)","trace[834674283] 'range keys from in-memory index tree'  (duration: 37.779328ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T08:06:26.765314Z","caller":"traceutil/trace.go:172","msg":"trace[756088777] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"116.837249ms","start":"2025-10-02T08:06:26.648465Z","end":"2025-10-02T08:06:26.765302Z","steps":["trace[756088777] 'process raft request'  (duration: 78.591816ms)","trace[756088777] 'compare'  (duration: 37.677969ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:07:19 up  2:49,  0 user,  load average: 3.97, 3.41, 2.50
	Linux default-k8s-diff-port-417078 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e20db31d550901a0af4d1d01bbd43e4c4e376a5f51d16b6befe7b4fd80f53fc] <==
	I1002 08:06:25.803750       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 08:06:25.803988       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 08:06:25.804117       1 main.go:148] setting mtu 1500 for CNI 
	I1002 08:06:25.804130       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 08:06:25.804140       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T08:06:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 08:06:26.000728       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 08:06:26.000758       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 08:06:26.000767       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 08:06:26.001116       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 08:06:56.001205       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 08:06:56.001211       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 08:06:56.001356       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 08:06:56.001506       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 08:06:57.601082       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 08:06:57.601216       1 metrics.go:72] Registering metrics
	I1002 08:06:57.601335       1 controller.go:711] "Syncing nftables rules"
	I1002 08:07:06.002974       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:07:06.003061       1 main.go:301] handling current node
	I1002 08:07:16.004393       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 08:07:16.004441       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3a01e925d0339bb867ed641377431c1c576bffc854679e92eb2e19a036a34feb] <==
	I1002 08:06:24.304378       1 cache.go:39] Caches are synced for autoregister controller
	I1002 08:06:24.307254       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 08:06:24.309114       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 08:06:24.309133       1 policy_source.go:240] refreshing policies
	I1002 08:06:24.326216       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 08:06:24.334908       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 08:06:24.334932       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 08:06:24.343714       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 08:06:24.343831       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 08:06:24.344026       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 08:06:24.344084       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 08:06:24.360092       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 08:06:24.438834       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1002 08:06:24.555531       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 08:06:24.785607       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 08:06:24.997974       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 08:06:26.174646       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 08:06:26.633700       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 08:06:26.891748       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 08:06:26.948300       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 08:06:27.095316       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.119.123"}
	I1002 08:06:27.126740       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.144.43"}
	I1002 08:06:28.950092       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 08:06:29.044555       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 08:06:29.195757       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3fe23dab4fa0ba272028a64c70d3af8948cb437fb69796d50bf0133f85d526af] <==
	I1002 08:06:28.900279       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 08:06:28.907635       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 08:06:28.907679       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 08:06:28.907704       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 08:06:28.909394       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 08:06:28.909430       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 08:06:28.915185       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 08:06:28.915278       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 08:06:28.915310       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 08:06:28.915431       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 08:06:28.918961       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 08:06:28.919383       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 08:06:28.927353       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 08:06:28.935865       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 08:06:28.936430       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 08:06:28.937393       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 08:06:28.937455       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 08:06:28.937558       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 08:06:28.942223       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 08:06:28.943707       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 08:06:28.945608       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 08:06:28.979758       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:06:29.006353       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 08:06:29.006381       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 08:06:29.006389       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [1c2b537ef32d116dc218025592702865324dd99cf3c1c074eda8168c73deb8fb] <==
	I1002 08:06:26.218782       1 server_linux.go:53] "Using iptables proxy"
	I1002 08:06:26.387751       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 08:06:26.488031       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 08:06:26.488636       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 08:06:26.488738       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 08:06:26.688430       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 08:06:26.688489       1 server_linux.go:132] "Using iptables Proxier"
	I1002 08:06:26.692996       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 08:06:26.693524       1 server.go:527] "Version info" version="v1.34.1"
	I1002 08:06:26.693545       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:06:26.698043       1 config.go:200] "Starting service config controller"
	I1002 08:06:26.698064       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 08:06:26.698086       1 config.go:106] "Starting endpoint slice config controller"
	I1002 08:06:26.698090       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 08:06:26.698107       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 08:06:26.698112       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 08:06:26.707870       1 config.go:309] "Starting node config controller"
	I1002 08:06:26.707903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 08:06:26.707914       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 08:06:26.799612       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 08:06:26.799731       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 08:06:26.799772       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [51204ad2326f23863feeb5f81eec088fffc09135e7fccfb05c306b274a31f295] <==
	I1002 08:06:23.847580       1 serving.go:386] Generated self-signed cert in-memory
	I1002 08:06:26.793379       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 08:06:26.793413       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 08:06:26.837726       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 08:06:26.837767       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 08:06:26.837821       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:06:26.837837       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 08:06:26.837860       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:06:26.837875       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:06:26.843522       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 08:06:26.843486       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 08:06:26.938337       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 08:06:26.938416       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 08:06:26.938512       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 08:06:30 default-k8s-diff-port-417078 kubelet[777]: E1002 08:06:30.631114     777 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/60d8096f-e9e3-4d0e-8f16-67ab47b4563e-kube-api-access-pvrv9 podName:60d8096f-e9e3-4d0e-8f16-67ab47b4563e nodeName:}" failed. No retries permitted until 2025-10-02 08:06:31.131065744 +0000 UTC m=+14.692866295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pvrv9" (UniqueName: "kubernetes.io/projected/60d8096f-e9e3-4d0e-8f16-67ab47b4563e-kube-api-access-pvrv9") pod "kubernetes-dashboard-855c9754f9-zm2mb" (UID: "60d8096f-e9e3-4d0e-8f16-67ab47b4563e") : failed to sync configmap cache: timed out waiting for the condition
	Oct 02 08:06:30 default-k8s-diff-port-417078 kubelet[777]: E1002 08:06:30.633039     777 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 02 08:06:30 default-k8s-diff-port-417078 kubelet[777]: E1002 08:06:30.633185     777 projected.go:196] Error preparing data for projected volume kube-api-access-dzgwt for pod kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t: failed to sync configmap cache: timed out waiting for the condition
	Oct 02 08:06:30 default-k8s-diff-port-417078 kubelet[777]: E1002 08:06:30.633283     777 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34-kube-api-access-dzgwt podName:0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34 nodeName:}" failed. No retries permitted until 2025-10-02 08:06:31.133260251 +0000 UTC m=+14.695060801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dzgwt" (UniqueName: "kubernetes.io/projected/0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34-kube-api-access-dzgwt") pod "dashboard-metrics-scraper-6ffb444bf9-wrn9t" (UID: "0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34") : failed to sync configmap cache: timed out waiting for the condition
	Oct 02 08:06:31 default-k8s-diff-port-417078 kubelet[777]: W1002 08:06:31.497864     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/crio-d1f748f8546e88eb6498f25a864ece614cd728f04c4f6b5aaa44471834f291e6 WatchSource:0}: Error finding container d1f748f8546e88eb6498f25a864ece614cd728f04c4f6b5aaa44471834f291e6: Status 404 returned error can't find the container with id d1f748f8546e88eb6498f25a864ece614cd728f04c4f6b5aaa44471834f291e6
	Oct 02 08:06:31 default-k8s-diff-port-417078 kubelet[777]: W1002 08:06:31.525419     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9b8a295e3342b217780fd21a8eb2d873d6dd517d07759502568fe81fa99fecba/crio-d8229060c645ab689f9fc104345fe6238ca4372a8e9894308d7b7018d8b4b063 WatchSource:0}: Error finding container d8229060c645ab689f9fc104345fe6238ca4372a8e9894308d7b7018d8b4b063: Status 404 returned error can't find the container with id d8229060c645ab689f9fc104345fe6238ca4372a8e9894308d7b7018d8b4b063
	Oct 02 08:06:39 default-k8s-diff-port-417078 kubelet[777]: I1002 08:06:39.046356     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zm2mb" podStartSLOduration=3.282872643 podStartE2EDuration="10.04633666s" podCreationTimestamp="2025-10-02 08:06:29 +0000 UTC" firstStartedPulling="2025-10-02 08:06:31.502452403 +0000 UTC m=+15.064252954" lastFinishedPulling="2025-10-02 08:06:38.26591642 +0000 UTC m=+21.827716971" observedRunningTime="2025-10-02 08:06:39.045963036 +0000 UTC m=+22.607763611" watchObservedRunningTime="2025-10-02 08:06:39.04633666 +0000 UTC m=+22.608137210"
	Oct 02 08:06:46 default-k8s-diff-port-417078 kubelet[777]: I1002 08:06:46.009074     777 scope.go:117] "RemoveContainer" containerID="ec70030b03597ebd0d38458a23a7978b766dc8556e2d715600829de5014c2d04"
	Oct 02 08:06:47 default-k8s-diff-port-417078 kubelet[777]: I1002 08:06:47.013021     777 scope.go:117] "RemoveContainer" containerID="ec70030b03597ebd0d38458a23a7978b766dc8556e2d715600829de5014c2d04"
	Oct 02 08:06:47 default-k8s-diff-port-417078 kubelet[777]: I1002 08:06:47.013336     777 scope.go:117] "RemoveContainer" containerID="2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff"
	Oct 02 08:06:47 default-k8s-diff-port-417078 kubelet[777]: E1002 08:06:47.013479     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrn9t_kubernetes-dashboard(0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t" podUID="0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34"
	Oct 02 08:06:48 default-k8s-diff-port-417078 kubelet[777]: I1002 08:06:48.018293     777 scope.go:117] "RemoveContainer" containerID="2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff"
	Oct 02 08:06:48 default-k8s-diff-port-417078 kubelet[777]: E1002 08:06:48.019007     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrn9t_kubernetes-dashboard(0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t" podUID="0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34"
	Oct 02 08:06:51 default-k8s-diff-port-417078 kubelet[777]: I1002 08:06:51.500936     777 scope.go:117] "RemoveContainer" containerID="2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff"
	Oct 02 08:06:51 default-k8s-diff-port-417078 kubelet[777]: E1002 08:06:51.501143     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrn9t_kubernetes-dashboard(0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t" podUID="0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34"
	Oct 02 08:06:57 default-k8s-diff-port-417078 kubelet[777]: I1002 08:06:57.042654     777 scope.go:117] "RemoveContainer" containerID="3e3390ef7a71ec7064e94b1c428bc44ed214876f28e31ea3bc944aab82217db4"
	Oct 02 08:07:06 default-k8s-diff-port-417078 kubelet[777]: I1002 08:07:06.759828     777 scope.go:117] "RemoveContainer" containerID="2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff"
	Oct 02 08:07:07 default-k8s-diff-port-417078 kubelet[777]: I1002 08:07:07.073492     777 scope.go:117] "RemoveContainer" containerID="2c87acd45f0995bdbf4842bf3e417ee0b9de20a5c3502c9ace3b240d421cc2ff"
	Oct 02 08:07:07 default-k8s-diff-port-417078 kubelet[777]: I1002 08:07:07.073847     777 scope.go:117] "RemoveContainer" containerID="89a092255a6551e8d029774a61b80e3deae1f18d316632be4c9595a6fce3e283"
	Oct 02 08:07:07 default-k8s-diff-port-417078 kubelet[777]: E1002 08:07:07.074100     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrn9t_kubernetes-dashboard(0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t" podUID="0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34"
	Oct 02 08:07:11 default-k8s-diff-port-417078 kubelet[777]: I1002 08:07:11.501250     777 scope.go:117] "RemoveContainer" containerID="89a092255a6551e8d029774a61b80e3deae1f18d316632be4c9595a6fce3e283"
	Oct 02 08:07:11 default-k8s-diff-port-417078 kubelet[777]: E1002 08:07:11.501947     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrn9t_kubernetes-dashboard(0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrn9t" podUID="0e3ee38d-2496-4a3e-9e6b-c9b74d7a3d34"
	Oct 02 08:07:14 default-k8s-diff-port-417078 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 08:07:14 default-k8s-diff-port-417078 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 08:07:14 default-k8s-diff-port-417078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [3424e6b891d1d444a5fd9113b3934912df22aa8b2559334195df2b60a5decea2] <==
	2025/10/02 08:06:38 Using namespace: kubernetes-dashboard
	2025/10/02 08:06:38 Using in-cluster config to connect to apiserver
	2025/10/02 08:06:38 Using secret token for csrf signing
	2025/10/02 08:06:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 08:06:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 08:06:38 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 08:06:38 Generating JWE encryption key
	2025/10/02 08:06:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 08:06:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 08:06:40 Initializing JWE encryption key from synchronized object
	2025/10/02 08:06:40 Creating in-cluster Sidecar client
	2025/10/02 08:06:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 08:06:40 Serving insecurely on HTTP port: 9090
	2025/10/02 08:07:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 08:06:38 Starting overwatch
	
	
	==> storage-provisioner [3e3390ef7a71ec7064e94b1c428bc44ed214876f28e31ea3bc944aab82217db4] <==
	I1002 08:06:26.147496       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 08:06:56.149703       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [60656c47cfe1b3b0b174507dbed097964a91a1226d4508163960b2e21510a0fe] <==
	I1002 08:06:57.131445       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 08:06:57.172598       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 08:06:57.173226       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 08:06:57.175669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:00.630748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:04.896072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:08.495465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:11.548872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:14.570843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:14.578662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:07:14.578802       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 08:07:14.579005       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-417078_865da2d9-fe60-4786-9791-4e1237283d1f!
	I1002 08:07:14.580117       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3841bb3-24e2-47d7-9ba0-774032dd0ed1", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-417078_865da2d9-fe60-4786-9791-4e1237283d1f became leader
	W1002 08:07:14.589286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:14.601182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 08:07:14.679956       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-417078_865da2d9-fe60-4786-9791-4e1237283d1f!
	W1002 08:07:16.604874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:16.612456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:18.617427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 08:07:18.623151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-417078 -n default-k8s-diff-port-417078
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-417078 -n default-k8s-diff-port-417078: exit status 2 (363.340471ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-417078 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.29s)
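The post-mortem above can be re-run by hand against the same profile. The following is a minimal sketch, assuming the jenkins workspace layout and profile name shown in this report; the first two commands are the exact ones the test helpers invoke, and minikube logs gathers a component dump comparable to the sections above:

    # Query the apiserver state as minikube reports it for this profile/node
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p default-k8s-diff-port-417078 -n default-k8s-diff-port-417078

    # List any pods that are not in the Running phase, across all namespaces
    kubectl --context default-k8s-diff-port-417078 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector='status.phase!=Running'

    # Collect the CRI-O / kubelet / control-plane logs for the profile
    out/minikube-linux-arm64 logs -p default-k8s-diff-port-417078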
E1002 08:12:31.982601  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:12:45.609394  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:12:49.716635  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:12:49.723062  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:12:49.734425  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:12:49.755779  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:12:49.797112  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:12:49.878492  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:12:50.039954  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:12:50.361292  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:12:51.003394  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:12:52.284869  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:12:54.846132  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:12:59.967350  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:04.337114  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:10.208653  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:13.311208  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (249/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.26
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.56
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.15
18 TestDownloadOnly/v1.34.1/DeleteAll 0.33
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 174.61
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.78
48 TestAddons/StoppedEnableDisable 12.21
49 TestCertOptions 39.51
50 TestCertExpiration 335.21
59 TestErrorSpam/setup 33.06
60 TestErrorSpam/start 0.79
61 TestErrorSpam/status 1.12
62 TestErrorSpam/pause 6.94
63 TestErrorSpam/unpause 5.83
64 TestErrorSpam/stop 1.43
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 80.71
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 29
71 TestFunctional/serial/KubeContext 0.07
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.46
76 TestFunctional/serial/CacheCmd/cache/add_local 1.07
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
81 TestFunctional/serial/CacheCmd/cache/delete 0.13
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
84 TestFunctional/serial/ExtraConfig 49.47
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.52
87 TestFunctional/serial/LogsFileCmd 1.48
88 TestFunctional/serial/InvalidService 4.01
90 TestFunctional/parallel/ConfigCmd 0.46
91 TestFunctional/parallel/DashboardCmd 7.57
92 TestFunctional/parallel/DryRun 0.47
93 TestFunctional/parallel/InternationalLanguage 0.23
94 TestFunctional/parallel/StatusCmd 1.06
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 24.75
102 TestFunctional/parallel/SSHCmd 0.52
103 TestFunctional/parallel/CpCmd 2.07
105 TestFunctional/parallel/FileSync 0.32
106 TestFunctional/parallel/CertSync 2.22
110 TestFunctional/parallel/NodeLabels 0.11
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
114 TestFunctional/parallel/License 0.37
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 1.19
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.93
122 TestFunctional/parallel/ImageCommands/Setup 0.66
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.32
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ServiceCmd/List 0.51
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
151 TestFunctional/parallel/ProfileCmd/profile_list 0.43
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
153 TestFunctional/parallel/MountCmd/any-port 8.15
154 TestFunctional/parallel/MountCmd/specific-port 2.01
155 TestFunctional/parallel/MountCmd/VerifyCleanup 2.03
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 205.18
164 TestMultiControlPlane/serial/DeployApp 6.4
165 TestMultiControlPlane/serial/PingHostFromPods 1.5
166 TestMultiControlPlane/serial/AddWorkerNode 61.72
167 TestMultiControlPlane/serial/NodeLabels 0.1
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.32
169 TestMultiControlPlane/serial/CopyFile 19.32
170 TestMultiControlPlane/serial/StopSecondaryNode 12.74
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.78
172 TestMultiControlPlane/serial/RestartSecondaryNode 32.36
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.31
185 TestJSONOutput/start/Command 85.83
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.71
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 40.89
211 TestKicCustomNetwork/use_default_bridge_network 38.35
212 TestKicExistingNetwork 37.56
213 TestKicCustomSubnet 38.33
214 TestKicStaticIP 39.01
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 76.55
219 TestMountStart/serial/StartWithMountFirst 9.65
220 TestMountStart/serial/VerifyMountFirst 0.3
221 TestMountStart/serial/StartWithMountSecond 7.86
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.63
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.21
226 TestMountStart/serial/RestartStopped 7.83
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 143.59
231 TestMultiNode/serial/DeployApp2Nodes 5.13
232 TestMultiNode/serial/PingHostFrom2Pods 0.91
233 TestMultiNode/serial/AddNode 59.64
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.48
237 TestMultiNode/serial/StopNode 2.3
238 TestMultiNode/serial/StartAfterStop 8.16
239 TestMultiNode/serial/RestartKeepsNodes 74.93
240 TestMultiNode/serial/DeleteNode 5.62
241 TestMultiNode/serial/StopMultiNode 23.74
242 TestMultiNode/serial/RestartMultiNode 47.54
243 TestMultiNode/serial/ValidateNameConflict 35.59
248 TestPreload 130.46
253 TestInsufficientStorage 11.09
254 TestRunningBinaryUpgrade 54.92
256 TestKubernetesUpgrade 355.81
257 TestMissingContainerUpgrade 112.34
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
260 TestNoKubernetes/serial/StartWithK8s 44.01
261 TestNoKubernetes/serial/StartWithStopK8s 35.8
262 TestNoKubernetes/serial/Start 10.62
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
264 TestNoKubernetes/serial/ProfileList 1.29
265 TestNoKubernetes/serial/Stop 1.31
266 TestNoKubernetes/serial/StartNoArgs 7.7
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
268 TestStoppedBinaryUpgrade/Setup 0.71
269 TestStoppedBinaryUpgrade/Upgrade 61.05
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.26
279 TestPause/serial/Start 82.93
280 TestPause/serial/SecondStartNoReconfiguration 30.45
289 TestNetworkPlugins/group/false 3.64
294 TestStartStop/group/old-k8s-version/serial/FirstStart 58.98
295 TestStartStop/group/old-k8s-version/serial/DeployApp 10.11
297 TestStartStop/group/old-k8s-version/serial/Stop 11.91
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
299 TestStartStop/group/old-k8s-version/serial/SecondStart 48.86
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
305 TestStartStop/group/no-preload/serial/FirstStart 79.09
307 TestStartStop/group/embed-certs/serial/FirstStart 89.66
308 TestStartStop/group/no-preload/serial/DeployApp 8.41
310 TestStartStop/group/no-preload/serial/Stop 11.92
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
312 TestStartStop/group/no-preload/serial/SecondStart 52.56
313 TestStartStop/group/embed-certs/serial/DeployApp 8.47
315 TestStartStop/group/embed-certs/serial/Stop 12.3
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/embed-certs/serial/SecondStart 59.01
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.32
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
329 TestStartStop/group/newest-cni/serial/FirstStart 42.45
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.49
331 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/Stop 1.3
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
336 TestStartStop/group/newest-cni/serial/SecondStart 15.37
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.98
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.34
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.9
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
344 TestNetworkPlugins/group/auto/Start 88.16
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
349 TestNetworkPlugins/group/flannel/Start 63.01
350 TestNetworkPlugins/group/auto/KubeletFlags 0.47
351 TestNetworkPlugins/group/auto/NetCatPod 12.39
352 TestNetworkPlugins/group/auto/DNS 0.19
353 TestNetworkPlugins/group/auto/Localhost 0.16
354 TestNetworkPlugins/group/auto/HairPin 0.13
355 TestNetworkPlugins/group/kindnet/Start 85.58
356 TestNetworkPlugins/group/flannel/ControllerPod 6.01
357 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
358 TestNetworkPlugins/group/flannel/NetCatPod 11.26
359 TestNetworkPlugins/group/flannel/DNS 0.2
360 TestNetworkPlugins/group/flannel/Localhost 0.17
361 TestNetworkPlugins/group/flannel/HairPin 0.15
362 TestNetworkPlugins/group/enable-default-cni/Start 51.26
363 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
365 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
366 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
367 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.31
368 TestNetworkPlugins/group/kindnet/DNS 0.16
369 TestNetworkPlugins/group/kindnet/Localhost 0.13
370 TestNetworkPlugins/group/kindnet/HairPin 0.14
371 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
372 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
373 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
374 TestNetworkPlugins/group/bridge/Start 84.14
375 TestNetworkPlugins/group/calico/Start 63.72
376 TestNetworkPlugins/group/calico/ControllerPod 6.01
377 TestNetworkPlugins/group/calico/KubeletFlags 0.31
378 TestNetworkPlugins/group/calico/NetCatPod 10.29
379 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
380 TestNetworkPlugins/group/bridge/NetCatPod 11.28
381 TestNetworkPlugins/group/calico/DNS 0.23
382 TestNetworkPlugins/group/calico/Localhost 0.15
383 TestNetworkPlugins/group/calico/HairPin 0.16
384 TestNetworkPlugins/group/bridge/DNS 0.23
385 TestNetworkPlugins/group/bridge/Localhost 0.18
386 TestNetworkPlugins/group/bridge/HairPin 0.19
387 TestNetworkPlugins/group/custom-flannel/Start 59.95
388 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
389 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
390 TestNetworkPlugins/group/custom-flannel/DNS 0.15
391 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
392 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (6.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-954800 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-954800 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.256044948s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.26s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 06:41:25.951558  294357 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1002 06:41:25.951638  294357 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-954800
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-954800: exit status 85 (101.603622ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-954800 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-954800 │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:41:19
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:41:19.745277  294362 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:41:19.745469  294362 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:41:19.745482  294362 out.go:374] Setting ErrFile to fd 2...
	I1002 06:41:19.745488  294362 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:41:19.745885  294362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	W1002 06:41:19.746082  294362 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21643-292504/.minikube/config/config.json: open /home/jenkins/minikube-integration/21643-292504/.minikube/config/config.json: no such file or directory
	I1002 06:41:19.747010  294362 out.go:368] Setting JSON to true
	I1002 06:41:19.747961  294362 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5031,"bootTime":1759382249,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 06:41:19.748070  294362 start.go:140] virtualization:  
	I1002 06:41:19.752401  294362 out.go:99] [download-only-954800] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1002 06:41:19.752602  294362 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 06:41:19.752647  294362 notify.go:220] Checking for updates...
	I1002 06:41:19.755660  294362 out.go:171] MINIKUBE_LOCATION=21643
	I1002 06:41:19.758566  294362 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:41:19.761501  294362 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 06:41:19.764470  294362 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 06:41:19.767553  294362 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 06:41:19.773528  294362 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 06:41:19.773862  294362 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:41:19.806557  294362 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 06:41:19.806724  294362 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:41:19.865739  294362 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-02 06:41:19.856192876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:41:19.865843  294362 docker.go:318] overlay module found
	I1002 06:41:19.868837  294362 out.go:99] Using the docker driver based on user configuration
	I1002 06:41:19.868880  294362 start.go:304] selected driver: docker
	I1002 06:41:19.868893  294362 start.go:924] validating driver "docker" against <nil>
	I1002 06:41:19.869023  294362 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:41:19.934476  294362 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-02 06:41:19.925446573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:41:19.934656  294362 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:41:19.934958  294362 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 06:41:19.935150  294362 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 06:41:19.938232  294362 out.go:171] Using Docker driver with root privileges
	I1002 06:41:19.941383  294362 cni.go:84] Creating CNI manager for ""
	I1002 06:41:19.941459  294362 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 06:41:19.941478  294362 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:41:19.941576  294362 start.go:348] cluster config:
	{Name:download-only-954800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-954800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:41:19.944698  294362 out.go:99] Starting "download-only-954800" primary control-plane node in "download-only-954800" cluster
	I1002 06:41:19.944730  294362 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 06:41:19.947675  294362 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:41:19.947739  294362 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 06:41:19.947814  294362 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:41:19.966672  294362 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:41:19.966883  294362 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 06:41:19.966982  294362 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:41:20.009000  294362 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1002 06:41:20.009035  294362 cache.go:58] Caching tarball of preloaded images
	I1002 06:41:20.009238  294362 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 06:41:20.013563  294362 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1002 06:41:20.013619  294362 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1002 06:41:20.110624  294362 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1002 06:41:20.110801  294362 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1002 06:41:24.212368  294362 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1002 06:41:24.212764  294362 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/download-only-954800/config.json ...
	I1002 06:41:24.212802  294362 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/download-only-954800/config.json: {Name:mk4f3b7344d831ddb4b4abf5fcd90dafad652a06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:41:24.212998  294362 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 06:41:24.213202  294362 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21643-292504/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-954800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-954800"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-954800
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-378847 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-378847 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.554889179s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.56s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 06:41:30.985590  294357 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 06:41:30.985629  294357 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-292504/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-378847
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-378847: exit status 85 (154.174506ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-954800 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-954800 │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │ 02 Oct 25 06:41 UTC │
	│ delete  │ -p download-only-954800                                                                                                                                                   │ download-only-954800 │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │ 02 Oct 25 06:41 UTC │
	│ start   │ -o=json --download-only -p download-only-378847 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-378847 │ jenkins │ v1.37.0 │ 02 Oct 25 06:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:41:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:41:26.476909  294565 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:41:26.477069  294565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:41:26.477081  294565 out.go:374] Setting ErrFile to fd 2...
	I1002 06:41:26.477086  294565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:41:26.477367  294565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 06:41:26.477807  294565 out.go:368] Setting JSON to true
	I1002 06:41:26.478647  294565 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5038,"bootTime":1759382249,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 06:41:26.478754  294565 start.go:140] virtualization:  
	I1002 06:41:26.482117  294565 out.go:99] [download-only-378847] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 06:41:26.482332  294565 notify.go:220] Checking for updates...
	I1002 06:41:26.485206  294565 out.go:171] MINIKUBE_LOCATION=21643
	I1002 06:41:26.488362  294565 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:41:26.491250  294565 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 06:41:26.494204  294565 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 06:41:26.497067  294565 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 06:41:26.502878  294565 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 06:41:26.503215  294565 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:41:26.524641  294565 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 06:41:26.524762  294565 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:41:26.580990  294565 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 06:41:26.571942254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:41:26.581099  294565 docker.go:318] overlay module found
	I1002 06:41:26.584051  294565 out.go:99] Using the docker driver based on user configuration
	I1002 06:41:26.584088  294565 start.go:304] selected driver: docker
	I1002 06:41:26.584094  294565 start.go:924] validating driver "docker" against <nil>
	I1002 06:41:26.584211  294565 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:41:26.636933  294565 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 06:41:26.627732268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:41:26.637105  294565 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:41:26.637392  294565 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 06:41:26.637552  294565 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 06:41:26.640799  294565 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-378847 host does not exist
	  To start a cluster, run: "minikube start -p download-only-378847"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.33s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-378847
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I1002 06:41:32.812741  294357 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-242470 --alsologtostderr --binary-mirror http://127.0.0.1:34303 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-242470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-242470
--- PASS: TestBinaryMirror (0.63s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-067378
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-067378: exit status 85 (80.9466ms)

                                                
                                                
-- stdout --
	* Profile "addons-067378" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-067378"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-067378
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-067378: exit status 85 (67.768738ms)

                                                
                                                
-- stdout --
	* Profile "addons-067378" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-067378"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (174.61s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-067378 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-067378 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m54.607396953s)
--- PASS: TestAddons/Setup (174.61s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-067378 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-067378 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.78s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-067378 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-067378 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [387d8c6e-6b3d-4b66-b3f5-f1a69445358e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [387d8c6e-6b3d-4b66-b3f5-f1a69445358e] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004077177s
addons_test.go:694: (dbg) Run:  kubectl --context addons-067378 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-067378 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-067378 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-067378 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.78s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.21s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-067378
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-067378: (11.913729186s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-067378
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-067378
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-067378
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

                                                
                                    
TestCertOptions (39.51s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-654417 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-654417 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.765896444s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-654417 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-654417 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-654417 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-654417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-654417
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-654417: (1.994398459s)
--- PASS: TestCertOptions (39.51s)
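
A minimal sketch, separate from cert_options_test.go, of how the SAN check above can be reproduced offline: copy the certificate out of the node (e.g. `minikube -p cert-options-654417 ssh "sudo cat /var/lib/minikube/certs/apiserver.crt"` redirected to a local file, as the ssh step above suggests) and decode it with crypto/x509; the local filename is hypothetical:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the node's cert
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)    // should include localhost and www.google.com
        fmt.Println("IP SANs: ", cert.IPAddresses) // should include 127.0.0.1 and 192.168.15.15
    }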

                                                
                                    
TestCertExpiration (335.21s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-759246 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1002 07:56:24.335219  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:56:41.264374  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-759246 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.383307598s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-759246 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-759246 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (1m55.986262614s)
helpers_test.go:175: Cleaning up "cert-expiration-759246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-759246
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-759246: (3.840830829s)
--- PASS: TestCertExpiration (335.21s)
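
For reference, the two --cert-expiration values used above unpack as follows (presumably the test lets the 3-minute certificates lapse before the second start, which is why the run spans several minutes); a tiny Go check of the arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        short, _ := time.ParseDuration("3m")   // first start: certs valid for 3 minutes
        long, _ := time.ParseDuration("8760h") // second start: 8760h = 365 days
        fmt.Printf("%v = %.0f minutes\n", short, short.Minutes())
        fmt.Printf("%v = %.0f days\n", long, long.Hours()/24)
    }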

                                                
                                    
TestErrorSpam/setup (33.06s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-126180 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-126180 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-126180 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-126180 --driver=docker  --container-runtime=crio: (33.057784006s)
--- PASS: TestErrorSpam/setup (33.06s)

                                                
                                    
TestErrorSpam/start (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

                                                
                                    
TestErrorSpam/status (1.12s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 status
--- PASS: TestErrorSpam/status (1.12s)

                                                
                                    
TestErrorSpam/pause (6.94s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 pause: exit status 80 (2.217941248s)

                                                
                                                
-- stdout --
	* Pausing node nospam-126180 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:48:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 pause: exit status 80 (2.263025373s)

                                                
                                                
-- stdout --
	* Pausing node nospam-126180 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:48:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 pause: exit status 80 (2.456377458s)

                                                
                                                
-- stdout --
	* Pausing node nospam-126180 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:48:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.94s)
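
All three pause attempts above exit with status 80 (minikube's GUEST_PAUSE error class) yet the test still passes, which suggests it is checking the command's output rather than requiring success. A minimal sketch, not the error_spam_test.go implementation, of pulling that numeric status out of a failed run:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "nospam-126180",
            "--log_dir", "/tmp/nospam-126180", "pause")
        out, err := cmd.CombinedOutput()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // 80 is what the runs above reported for the GUEST_PAUSE failure.
            fmt.Printf("pause exited with status %d\n%s", exitErr.ExitCode(), out)
            return
        }
        if err != nil {
            fmt.Println("command did not start:", err)
            return
        }
        fmt.Println("pause succeeded")
    }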

                                                
                                    
TestErrorSpam/unpause (5.83s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 unpause: exit status 80 (2.058009661s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-126180 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:48:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 unpause: exit status 80 (1.755854143s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-126180 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:48:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 unpause: exit status 80 (2.015142628s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-126180 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T06:48:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.83s)

                                                
                                    
TestErrorSpam/stop (1.43s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 stop: (1.230517016s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-126180 --log_dir /tmp/nospam-126180 stop
--- PASS: TestErrorSpam/stop (1.43s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21643-292504/.minikube/files/etc/test/nested/copy/294357/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (80.71s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-615837 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1002 06:49:28.909393  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:49:28.915927  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:49:28.927352  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:49:28.948758  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:49:28.990255  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:49:29.071673  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:49:29.233184  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:49:29.554752  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:49:30.196520  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:49:31.478384  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:49:34.041029  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:49:39.163064  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:49:49.405056  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-615837 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.712966673s)
--- PASS: TestFunctional/serial/StartWithProxy (80.71s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1002 06:50:03.312747  294357 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-615837 --alsologtostderr -v=8
E1002 06:50:09.887016  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-615837 --alsologtostderr -v=8: (28.991930473s)
functional_test.go:678: soft start took 28.994840476s for "functional-615837" cluster.
I1002 06:50:32.305055  294357 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.00s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-615837 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-615837 cache add registry.k8s.io/pause:3.1: (1.158435948s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-615837 cache add registry.k8s.io/pause:3.3: (1.187820936s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-615837 cache add registry.k8s.io/pause:latest: (1.11777975s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-615837 /tmp/TestFunctionalserialCacheCmdcacheadd_local3355975396/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 cache add minikube-local-cache-test:functional-615837
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 cache delete minikube-local-cache-test:functional-615837
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-615837
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-615837 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (294.241636ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
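
A minimal sketch, assuming the functional-615837 node is still up, of the presence check behind the reload sequence above: `ssh sudo crictl inspecti <image>` exits non-zero (as in the FATA output) while the image is absent from the runtime, and zero again once `cache reload` has restored it:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imagePresent reports whether crictl can inspect the image inside the node.
    func imagePresent(profile, image string) bool {
        cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
            "ssh", "sudo", "crictl", "inspecti", image)
        return cmd.Run() == nil
    }

    func main() {
        fmt.Println("pause:latest present:",
            imagePresent("functional-615837", "registry.k8s.io/pause:latest"))
    }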

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 kubectl -- --context functional-615837 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-615837 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (49.47s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-615837 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1002 06:50:50.849338  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-615837 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.471977535s)
functional_test.go:776: restart took 49.472071328s for "functional-615837" cluster.
I1002 06:51:29.134733  294357 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (49.47s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-615837 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
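
A minimal sketch (not the functional_test.go implementation) of the health check reported above: fetch the control-plane pods as JSON and print each pod's phase and Ready condition, which is the information behind the "phase: Running" / "status: Ready" lines:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type podList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Phase      string `json:"phase"`
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "--context", "functional-615837",
            "get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
        if err != nil {
            panic(err)
        }
        var pods podList
        if err := json.Unmarshal(out, &pods); err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := "Unknown"
            for _, c := range p.Status.Conditions {
                if c.Type == "Ready" {
                    ready = c.Status
                }
            }
            fmt.Printf("%s: phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
        }
    }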

                                                
                                    
TestFunctional/serial/LogsCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-615837 logs: (1.522116282s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 logs --file /tmp/TestFunctionalserialLogsFileCmd2831603441/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-615837 logs --file /tmp/TestFunctionalserialLogsFileCmd2831603441/001/logs.txt: (1.483413613s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
TestFunctional/serial/InvalidService (4.01s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-615837 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-615837
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-615837: exit status 115 (403.119747ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30363 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-615837 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-615837 config get cpus: exit status 14 (84.227575ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-615837 config get cpus: exit status 14 (69.435481ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
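
In this run `config get` exits 14 with "specified key could not be found in config" whenever the key is unset, so a caller can treat that status as "no value" rather than as a hard error. A small sketch of that interpretation (the exit-code mapping is taken from the output above, not from minikube's source):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    // configGet returns the value and whether the key was set at all.
    func configGet(profile, key string) (string, bool, error) {
        out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
            "config", "get", key).Output()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
            return "", false, nil // key not present in config
        }
        if err != nil {
            return "", false, err
        }
        return strings.TrimSpace(string(out)), true, nil
    }

    func main() {
        fmt.Println(configGet("functional-615837", "cpus"))
    }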

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-615837 --alsologtostderr -v=1]
2025/10/02 07:02:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-615837 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 322087: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.57s)
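
A minimal sketch of the probe implied by the DEBUG line above: poll the proxy URL that `minikube dashboard --url` printed until it responds, giving up after a short deadline. The URL and port are the ones from this run:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
        client := &http.Client{Timeout: 2 * time.Second}
        deadline := time.Now().Add(30 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                fmt.Println("dashboard answered with HTTP", resp.StatusCode)
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("dashboard did not answer before the deadline")
    }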

                                                
                                    
TestFunctional/parallel/DryRun (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-615837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-615837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (210.001184ms)

                                                
                                                
-- stdout --
	* [functional-615837] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:02:01.599921  321793 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:02:01.600126  321793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:02:01.600152  321793 out.go:374] Setting ErrFile to fd 2...
	I1002 07:02:01.600172  321793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:02:01.600463  321793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:02:01.600965  321793 out.go:368] Setting JSON to false
	I1002 07:02:01.601946  321793 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6273,"bootTime":1759382249,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:02:01.602046  321793 start.go:140] virtualization:  
	I1002 07:02:01.607885  321793 out.go:179] * [functional-615837] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:02:01.610987  321793 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:02:01.611022  321793 notify.go:220] Checking for updates...
	I1002 07:02:01.614092  321793 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:02:01.617186  321793 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:02:01.620254  321793 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:02:01.623165  321793 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:02:01.626050  321793 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:02:01.629510  321793 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:02:01.630123  321793 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:02:01.662873  321793 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:02:01.663018  321793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:02:01.739428  321793 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:02:01.729127484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:02:01.739538  321793 docker.go:318] overlay module found
	I1002 07:02:01.742745  321793 out.go:179] * Using the docker driver based on existing profile
	I1002 07:02:01.745710  321793 start.go:304] selected driver: docker
	I1002 07:02:01.745736  321793 start.go:924] validating driver "docker" against &{Name:functional-615837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-615837 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:02:01.745842  321793 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:02:01.749409  321793 out.go:203] 
	W1002 07:02:01.752208  321793 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 07:02:01.755091  321793 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-615837 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
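
The dry run fails fast because the requested allocation (250MB) is below the usable minimum minikube reports (1800MB), which surfaces as exit status 23 / RSRC_INSUFFICIENT_REQ_MEMORY before any node is touched. A small sketch of that validation; the constant comes from the error text above, not from minikube's source:

    package main

    import "fmt"

    const minUsableMiB = 1800 // as reported in the error message above

    func validateMemory(requestedMiB int) error {
        if requestedMiB < minUsableMiB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
                requestedMiB, minUsableMiB)
        }
        return nil
    }

    func main() {
        fmt.Println(validateMemory(250))  // rejected, as in this test
        fmt.Println(validateMemory(4096)) // accepted
    }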

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-615837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-615837 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (226.781212ms)

                                                
                                                
-- stdout --
	* [functional-615837] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:02:02.087228  321911 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:02:02.087411  321911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:02:02.087434  321911 out.go:374] Setting ErrFile to fd 2...
	I1002 07:02:02.087454  321911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:02:02.088497  321911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:02:02.088963  321911 out.go:368] Setting JSON to false
	I1002 07:02:02.089952  321911 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6273,"bootTime":1759382249,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:02:02.090065  321911 start.go:140] virtualization:  
	I1002 07:02:02.093472  321911 out.go:179] * [functional-615837] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1002 07:02:02.096729  321911 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:02:02.096774  321911 notify.go:220] Checking for updates...
	I1002 07:02:02.102843  321911 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:02:02.105883  321911 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:02:02.108774  321911 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:02:02.111627  321911 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:02:02.115180  321911 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:02:02.118496  321911 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:02:02.119047  321911 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:02:02.160089  321911 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:02:02.160250  321911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:02:02.225359  321911 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:02:02.214995625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:02:02.225477  321911 docker.go:318] overlay module found
	I1002 07:02:02.228526  321911 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1002 07:02:02.231295  321911 start.go:304] selected driver: docker
	I1002 07:02:02.231320  321911 start.go:924] validating driver "docker" against &{Name:functional-615837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-615837 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:02:02.231428  321911 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:02:02.235066  321911 out.go:203] 
	W1002 07:02:02.238004  321911 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 07:02:02.240818  321911 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
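
The two invocations above cover both output modes of the status command: a custom Go template passed with -f and machine-readable JSON with -o json. A minimal sketch of driving the same calls from Go (not part of the test suite; assumes minikube is on PATH and reuses the profile name from the run above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-615837" // profile name taken from the run above

	// Same kind of Go template the test passes via -f: one field per component.
	format := "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := exec.Command("minikube", "-p", profile, "status", "-f", format).CombinedOutput()
	// Note: minikube status can exit non-zero when a component is stopped,
	// so the error is informational here rather than fatal.
	fmt.Printf("templated: %s (err=%v)\n", out, err)

	// The same data as JSON, for machine consumption.
	out, err = exec.Command("minikube", "-p", profile, "status", "-o", "json").CombinedOutput()
	fmt.Printf("json: %s (err=%v)\n", out, err)
}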

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (24.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c7cfbc65-535e-4329-a245-4b87be29f553] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.002960594s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-615837 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-615837 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-615837 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-615837 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8cfc3b6d-dec4-4d86-8b3a-a59a8a09d168] Pending
helpers_test.go:352: "sp-pod" [8cfc3b6d-dec4-4d86-8b3a-a59a8a09d168] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8cfc3b6d-dec4-4d86-8b3a-a59a8a09d168] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003370673s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-615837 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-615837 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-615837 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8f0d0405-bbbc-45d0-9ba9-2272a1f88aa1] Pending
helpers_test.go:352: "sp-pod" [8f0d0405-bbbc-45d0-9ba9-2272a1f88aa1] Running
E1002 06:52:12.770868  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003518198s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-615837 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.75s)
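
What the sequence above verifies is that data written through the pod survives the pod itself: the claim is created once, a marker file is written, the pod is deleted and recreated, and the file is still there. A sketch of the same flow, assuming kubectl is on PATH, the functional-615837 context exists, and the testdata manifests named in the log are available; the run helper is illustrative only:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run shells out to kubectl against the context used by the test above.
func run(args ...string) string {
	full := append([]string{"--context", "functional-615837"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Claim storage, start a pod that mounts it, and write a marker file.
	run("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=4m")
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the claim (and the file on it) should outlive it.
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=4m")

	fmt.Println(run("exec", "sp-pod", "--", "ls", "/tmp/mount")) // expect "foo"
}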

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh -n functional-615837 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 cp functional-615837:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1246834115/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh -n functional-615837 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh -n functional-615837 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.07s)
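
The three copies above exercise both directions of minikube cp (host to node, node to host) plus a destination whose parent directories do not yet exist on the node. A round-trip sketch, assuming minikube is on PATH and the same profile; the local destination path is a placeholder:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// mk runs a minikube subcommand against the profile used above.
func mk(args ...string) string {
	full := append([]string{"-p", "functional-615837"}, args...)
	out, err := exec.Command("minikube", full...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Host -> node, node -> host, then read the file back over ssh.
	mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	mk("cp", "functional-615837:/home/docker/cp-test.txt", "/tmp/cp-test-roundtrip.txt")
	fmt.Print(mk("ssh", "-n", "functional-615837", "sudo cat /home/docker/cp-test.txt"))
}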

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/294357/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "sudo cat /etc/test/nested/copy/294357/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/294357.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "sudo cat /etc/ssl/certs/294357.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/294357.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "sudo cat /usr/share/ca-certificates/294357.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2943572.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "sudo cat /etc/ssl/certs/2943572.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2943572.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "sudo cat /usr/share/ca-certificates/2943572.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.22s)
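
The test reads each synced certificate under its own name (294357.pem, 2943572.pem) and under a hash-named file (51391683.0, 3ec20f2e.0), which appear to be the OpenSSL subject-hash links conventionally kept in /etc/ssl/certs. A sketch, assuming openssl and minikube are on PATH and using a placeholder path for the local certificate:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	localCert := "/path/to/294357.pem" // placeholder: the PEM that was synced into the node

	// openssl prints the 8-hex-digit subject hash used for the ".0" link name.
	hash, err := exec.Command("openssl", "x509", "-noout", "-subject_hash", "-in", localCert).Output()
	if err != nil {
		log.Fatal(err)
	}
	name := strings.TrimSpace(string(hash)) + ".0"

	// Read the hash-named copy inside the node, the same way the test does.
	out, err := exec.Command("minikube", "-p", "functional-615837", "ssh",
		"sudo cat /etc/ssl/certs/"+name).CombinedOutput()
	if err != nil {
		log.Fatalf("%s not present in the node: %v\n%s", name, err, out)
	}
	fmt.Printf("node has /etc/ssl/certs/%s (%d bytes)\n", name, len(out))
}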

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-615837 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-615837 ssh "sudo systemctl is-active docker": exit status 1 (358.004613ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-615837 ssh "sudo systemctl is-active containerd": exit status 1 (367.996625ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
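
The two non-zero exits above are the expected result, not a failure: systemctl is-active prints "inactive" and exits 3 for a unit that is not running, and minikube ssh passes that exit status through. A sketch that keys off the printed state instead of the exit code (assumes minikube on PATH, profile name from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeState asks systemd inside the node whether a unit is active.
// A non-zero exit is normal for stopped units, so only stdout is inspected.
func runtimeState(profile, unit string) string {
	out, _ := exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out)) // "active", "inactive", "unknown", ...
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s: %s\n", unit, runtimeState("functional-615837", unit))
	}
}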

                                                
                                    
x
+
TestFunctional/parallel/License (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-615837 version -o=json --components: (1.19174777s)
--- PASS: TestFunctional/parallel/Version/components (1.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-615837 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-615837 image ls --format short --alsologtostderr:
I1002 07:02:11.588967  322454 out.go:360] Setting OutFile to fd 1 ...
I1002 07:02:11.589121  322454 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:02:11.589168  322454 out.go:374] Setting ErrFile to fd 2...
I1002 07:02:11.589182  322454 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:02:11.589505  322454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
I1002 07:02:11.590224  322454 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:02:11.590434  322454 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:02:11.591037  322454 cli_runner.go:164] Run: docker container inspect functional-615837 --format={{.State.Status}}
I1002 07:02:11.609911  322454 ssh_runner.go:195] Run: systemctl --version
I1002 07:02:11.609966  322454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-615837
I1002 07:02:11.629368  322454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/functional-615837/id_rsa Username:docker}
I1002 07:02:11.737975  322454 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-615837 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/library/nginx                 │ alpine             │ 35f3cbee4fb77 │ 54.3MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ localhost/my-image                      │ functional-615837  │ c23855ffdc7a9 │ 1.64MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/library/nginx                 │ latest             │ 0777d15d89ece │ 202MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-615837 image ls --format table --alsologtostderr:
I1002 07:02:16.229010  322923 out.go:360] Setting OutFile to fd 1 ...
I1002 07:02:16.229468  322923 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:02:16.229777  322923 out.go:374] Setting ErrFile to fd 2...
I1002 07:02:16.229809  322923 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:02:16.230109  322923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
I1002 07:02:16.230821  322923 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:02:16.231008  322923 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:02:16.231549  322923 cli_runner.go:164] Run: docker container inspect functional-615837 --format={{.State.Status}}
I1002 07:02:16.252587  322923 ssh_runner.go:195] Run: systemctl --version
I1002 07:02:16.252649  322923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-615837
I1002 07:02:16.271341  322923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/functional-615837/id_rsa Username:docker}
I1002 07:02:16.369674  322923 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-615837 image ls --format json --alsologtostderr:
[{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54348302"},{"id":"0777d15d89ecedd8739877d62d8983e9f4b081efa23140db06299b0abe7a985b","repoDigests":["docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc","docker.io/library/nginx@sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992"],"repoTags":["docker.io/library/nginx:latest"],"size":"202036629"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k
8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa
0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.
k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-pr
ovisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"c23855ffdc7a9183d8332147c84eeeb0f4f20a4659ff88f64b046213b900aca2","repoDigests":["localhost/my-image@sha256:521acd1f5a2448d0487b5f37168cc9ab663af2ed5eedfc42c1af9a3a4de52a0e"],"repoTags":["localhost/my-image:functional-615837"],"size":"1640791"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"aec6ed3284dc72180b5bf5ba563b4e92b2f251d637481a5b99c6a7d3be2cfc05","repoDigests":["docker.io/library/b0713352d44e8fe8b7c56a45a8f3ecd445cdefcdb8fc20c5ceb0117b6dc7ba1d-tmp@sha256:57954a16ba774bf1076ed9f3cdf7c0ff0aba6c0cbb93889ad3c11a3f066c7525"],"
repoTags":[],"size":"1638179"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a
200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-615837 image ls --format json --alsologtostderr:
I1002 07:02:15.986217  322885 out.go:360] Setting OutFile to fd 1 ...
I1002 07:02:15.986365  322885 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:02:15.986377  322885 out.go:374] Setting ErrFile to fd 2...
I1002 07:02:15.986382  322885 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:02:15.986671  322885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
I1002 07:02:15.987359  322885 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:02:15.987518  322885 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:02:15.988006  322885 cli_runner.go:164] Run: docker container inspect functional-615837 --format={{.State.Status}}
I1002 07:02:16.015890  322885 ssh_runner.go:195] Run: systemctl --version
I1002 07:02:16.015954  322885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-615837
I1002 07:02:16.034861  322885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/functional-615837/id_rsa Username:docker}
I1002 07:02:16.134148  322885 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
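
The JSON above is a flat array of objects with id, repoDigests, repoTags and size (the size is a byte count encoded as a string). A sketch that decodes it, assuming the same invocation and the field names visible in the output shown:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-615837",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-55s %s bytes\n", tag, img.Size)
	}
}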

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-615837 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac
repoTags:
- docker.io/library/nginx:alpine
size: "54348302"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 0777d15d89ecedd8739877d62d8983e9f4b081efa23140db06299b0abe7a985b
repoDigests:
- docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc
- docker.io/library/nginx@sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992
repoTags:
- docker.io/library/nginx:latest
size: "202036629"
- id: 71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1634527"
- id: c23855ffdc7a9183d8332147c84eeeb0f4f20a4659ff88f64b046213b900aca2
repoDigests:
- localhost/my-image@sha256:521acd1f5a2448d0487b5f37168cc9ab663af2ed5eedfc42c1af9a3a4de52a0e
repoTags:
- localhost/my-image:functional-615837
size: "1640791"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: aec6ed3284dc72180b5bf5ba563b4e92b2f251d637481a5b99c6a7d3be2cfc05
repoDigests:
- docker.io/library/b0713352d44e8fe8b7c56a45a8f3ecd445cdefcdb8fc20c5ceb0117b6dc7ba1d-tmp@sha256:57954a16ba774bf1076ed9f3cdf7c0ff0aba6c0cbb93889ad3c11a3f066c7525
repoTags: []
size: "1638179"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-615837 image ls --format yaml --alsologtostderr:
I1002 07:02:15.753690  322849 out.go:360] Setting OutFile to fd 1 ...
I1002 07:02:15.753876  322849 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:02:15.753907  322849 out.go:374] Setting ErrFile to fd 2...
I1002 07:02:15.753929  322849 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:02:15.754209  322849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
I1002 07:02:15.754827  322849 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:02:15.754992  322849 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:02:15.755502  322849 cli_runner.go:164] Run: docker container inspect functional-615837 --format={{.State.Status}}
I1002 07:02:15.773413  322849 ssh_runner.go:195] Run: systemctl --version
I1002 07:02:15.773461  322849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-615837
I1002 07:02:15.795615  322849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/functional-615837/id_rsa Username:docker}
I1002 07:02:15.895785  322849 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-615837 ssh pgrep buildkitd: exit status 1 (281.896151ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image build -t localhost/my-image:functional-615837 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-615837 image build -t localhost/my-image:functional-615837 testdata/build --alsologtostderr: (3.418147808s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-615837 image build -t localhost/my-image:functional-615837 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> aec6ed3284d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-615837
--> c23855ffdc7
Successfully tagged localhost/my-image:functional-615837
c23855ffdc7a9183d8332147c84eeeb0f4f20a4659ff88f64b046213b900aca2
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-615837 image build -t localhost/my-image:functional-615837 testdata/build --alsologtostderr:
I1002 07:02:12.113685  322552 out.go:360] Setting OutFile to fd 1 ...
I1002 07:02:12.114503  322552 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:02:12.114542  322552 out.go:374] Setting ErrFile to fd 2...
I1002 07:02:12.114561  322552 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:02:12.114884  322552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
I1002 07:02:12.115591  322552 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:02:12.116278  322552 config.go:182] Loaded profile config "functional-615837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 07:02:12.116819  322552 cli_runner.go:164] Run: docker container inspect functional-615837 --format={{.State.Status}}
I1002 07:02:12.133960  322552 ssh_runner.go:195] Run: systemctl --version
I1002 07:02:12.134017  322552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-615837
I1002 07:02:12.159588  322552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/functional-615837/id_rsa Username:docker}
I1002 07:02:12.253915  322552 build_images.go:161] Building image from path: /tmp/build.1412478959.tar
I1002 07:02:12.253990  322552 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 07:02:12.262400  322552 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1412478959.tar
I1002 07:02:12.266610  322552 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1412478959.tar: stat -c "%s %y" /var/lib/minikube/build/build.1412478959.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1412478959.tar': No such file or directory
I1002 07:02:12.266691  322552 ssh_runner.go:362] scp /tmp/build.1412478959.tar --> /var/lib/minikube/build/build.1412478959.tar (3072 bytes)
I1002 07:02:12.284691  322552 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1412478959
I1002 07:02:12.292962  322552 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1412478959 -xf /var/lib/minikube/build/build.1412478959.tar
I1002 07:02:12.301308  322552 crio.go:315] Building image: /var/lib/minikube/build/build.1412478959
I1002 07:02:12.301433  322552 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-615837 /var/lib/minikube/build/build.1412478959 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1002 07:02:15.448306  322552 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-615837 /var/lib/minikube/build/build.1412478959 --cgroup-manager=cgroupfs: (3.146839974s)
I1002 07:02:15.448385  322552 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1412478959
I1002 07:02:15.456542  322552 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1412478959.tar
I1002 07:02:15.464707  322552 build_images.go:217] Built localhost/my-image:functional-615837 from /tmp/build.1412478959.tar
I1002 07:02:15.464740  322552 build_images.go:133] succeeded building to: functional-615837
I1002 07:02:15.464746  322552 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)
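
The build context in testdata/build is small enough to reconstruct from the three steps printed above: a Dockerfile with FROM gcr.io/k8s-minikube/busybox, RUN true and ADD content.txt /, plus the content.txt file itself. A sketch that recreates an equivalent context in a temporary directory and builds it inside the node; file contents other than the Dockerfile steps are made up for illustration:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	// Same three steps as the build output above.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// With the crio runtime the build is delegated to podman inside the node,
	// which is what the stderr above shows.
	out, err := exec.Command("minikube", "-p", "functional-615837",
		"image", "build", "-t", "localhost/my-image:functional-615837", dir).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		log.Fatal(err)
	}
}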

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-615837
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image rm kicbase/echo-server:functional-615837 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-615837 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-615837 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-615837 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 318160: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-615837 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-615837 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-615837 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [f062ea09-ea62-4980-82fb-bc9b63d1beea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [f062ea09-ea62-4980-82fb-bc9b63d1beea] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003478262s
I1002 06:51:53.711574  294357 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)
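
The setup step above polls for up to 4m0s until the pod labelled run=nginx-svc reports Running before the tunnel is probed. Below is a minimal Go sketch of such a wait loop, shelling out to kubectl the same way the suite shells out to its binaries; the context name, namespace, label and timeout are taken from the log, while the helper name and the 2-second poll interval are illustrative assumptions, not the repository's actual helper.

// wait_for_pods.go: illustrative sketch (not helpers_test.go) of waiting for
// pods matching a label to reach the Running phase by polling kubectl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunningPods polls `kubectl get pods` until every matching pod phase is Running.
func waitForRunningPods(kubectlContext, namespace, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectlContext,
			"get", "pods", "-n", namespace, "-l", label,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		allRunning := err == nil && len(phases) > 0
		for _, p := range phases {
			if p != "Running" {
				allRunning = false
			}
		}
		if allRunning {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed poll interval for this sketch
	}
	return fmt.Errorf("pods matching %q not Running within %v", label, timeout)
}

func main() {
	err := waitForRunningPods("functional-615837", "default", "run=nginx-svc", 4*time.Minute)
	fmt.Println("wait result:", err)
}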

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-615837 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.191.198 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-615837 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 service list -o json
functional_test.go:1504: Took "524.182295ms" to run "out/minikube-linux-arm64 -p functional-615837 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "368.457856ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "61.607741ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "378.997699ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "52.17926ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.15s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-615837 /tmp/TestFunctionalparallelMountCmdany-port3486724989/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759388509368093861" to /tmp/TestFunctionalparallelMountCmdany-port3486724989/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759388509368093861" to /tmp/TestFunctionalparallelMountCmdany-port3486724989/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759388509368093861" to /tmp/TestFunctionalparallelMountCmdany-port3486724989/001/test-1759388509368093861
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-615837 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (355.21051ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1002 07:01:49.723578  294357 retry.go:31] will retry after 741.416863ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 07:01 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 07:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 07:01 test-1759388509368093861
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh cat /mount-9p/test-1759388509368093861
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-615837 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [60590501-6c50-42e9-991d-522e7aa90de2] Pending
helpers_test.go:352: "busybox-mount" [60590501-6c50-42e9-991d-522e7aa90de2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [60590501-6c50-42e9-991d-522e7aa90de2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [60590501-6c50-42e9-991d-522e7aa90de2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004375511s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-615837 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-615837 /tmp/TestFunctionalparallelMountCmdany-port3486724989/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.15s)
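
In the run above the first "findmnt -T /mount-9p | grep 9p" over minikube ssh fails, retry.go waits roughly 741ms, and the second attempt succeeds once the 9p mount is up. The following Go sketch reproduces that retry-until-mounted pattern; the binary path, profile name and findmnt command come from the log, while the retry policy (five attempts, doubling backoff) is an assumption for the sketch, not minikube's retry.go.

// retry_findmnt.go: illustrative retry loop around "minikube ssh findmnt" for a 9p mount.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// mountVisible returns true when findmnt inside the guest can see the 9p mount.
func mountVisible(profile string) bool {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "findmnt -T /mount-9p | grep 9p")
	return cmd.Run() == nil // non-zero exit means the mount is not there yet
}

func main() {
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		if mountVisible("functional-615837") {
			fmt.Println("9p mount is visible in the guest")
			return
		}
		fmt.Printf("attempt %d failed, retrying after %v\n", attempt, backoff)
		time.Sleep(backoff)
		backoff *= 2 // simple doubling backoff, assumed for this sketch
	}
	fmt.Println("mount never became visible")
}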

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-615837 /tmp/TestFunctionalparallelMountCmdspecific-port3943870708/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-615837 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.503878ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1002 07:01:57.876694  294357 retry.go:31] will retry after 609.682619ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-615837 /tmp/TestFunctionalparallelMountCmdspecific-port3943870708/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-615837 ssh "sudo umount -f /mount-9p": exit status 1 (282.567037ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-615837 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-615837 /tmp/TestFunctionalparallelMountCmdspecific-port3943870708/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-615837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2398992057/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-615837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2398992057/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-615837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2398992057/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-615837 ssh "findmnt -T" /mount1: exit status 1 (703.947119ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1002 07:02:00.243060  294357 retry.go:31] will retry after 392.222857ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-615837 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-615837 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-615837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2398992057/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-615837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2398992057/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-615837 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2398992057/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-615837
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-615837
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-615837
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (205.18s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1002 07:04:28.907758  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-550225 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m24.273218258s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (205.18s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.4s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- rollout status deployment/busybox
E1002 07:05:51.974360  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-550225 kubectl -- rollout status deployment/busybox: (3.576612582s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-gph4b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-q95k5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-wbl7l -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-gph4b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-q95k5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-wbl7l -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-gph4b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-q95k5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-wbl7l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.40s)
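
The deployment check above runs nslookup inside each busybox pod for kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local via the "minikube kubectl --" passthrough. A small Go sketch of that loop is below; the pod names, lookup targets and command form are taken from the log, and the loop itself is a simplified illustration rather than ha_test.go.

// dns_check.go: illustrative per-pod DNS check loop using the minikube kubectl passthrough.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-gph4b", "busybox-7b57f96db7-q95k5", "busybox-7b57f96db7-wbl7l"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			// Equivalent to: out/minikube-linux-arm64 -p ha-550225 kubectl -- exec <pod> -- nslookup <name>
			err := exec.Command("out/minikube-linux-arm64", "-p", "ha-550225",
				"kubectl", "--", "exec", pod, "--", "nslookup", name).Run()
			fmt.Printf("%s -> nslookup %s: err=%v\n", pod, name, err)
		}
	}
}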

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.5s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-gph4b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-gph4b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-q95k5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-q95k5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-wbl7l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 kubectl -- exec busybox-7b57f96db7-wbl7l -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.50s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (61.72s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 node add --alsologtostderr -v 5
E1002 07:06:41.264523  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:06:41.271504  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:06:41.282887  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:06:41.304287  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:06:41.345686  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:06:41.427195  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:06:41.590557  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:06:41.912184  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:06:42.554247  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:06:43.835660  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:06:46.396994  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:06:51.519192  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-550225 node add --alsologtostderr -v 5: (1m0.640875039s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5: (1.078342342s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.72s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-550225 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.32s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.321019135s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.32s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.32s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 status --output json --alsologtostderr -v 5
E1002 07:07:01.761257  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-550225 status --output json --alsologtostderr -v 5: (1.005388894s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp testdata/cp-test.txt ha-550225:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216719830/001/cp-test_ha-550225.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225:/home/docker/cp-test.txt ha-550225-m02:/home/docker/cp-test_ha-550225_ha-550225-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m02 "sudo cat /home/docker/cp-test_ha-550225_ha-550225-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225:/home/docker/cp-test.txt ha-550225-m03:/home/docker/cp-test_ha-550225_ha-550225-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m03 "sudo cat /home/docker/cp-test_ha-550225_ha-550225-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225:/home/docker/cp-test.txt ha-550225-m04:/home/docker/cp-test_ha-550225_ha-550225-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m04 "sudo cat /home/docker/cp-test_ha-550225_ha-550225-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp testdata/cp-test.txt ha-550225-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216719830/001/cp-test_ha-550225-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225-m02:/home/docker/cp-test.txt ha-550225:/home/docker/cp-test_ha-550225-m02_ha-550225.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225 "sudo cat /home/docker/cp-test_ha-550225-m02_ha-550225.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225-m02:/home/docker/cp-test.txt ha-550225-m03:/home/docker/cp-test_ha-550225-m02_ha-550225-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m03 "sudo cat /home/docker/cp-test_ha-550225-m02_ha-550225-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225-m02:/home/docker/cp-test.txt ha-550225-m04:/home/docker/cp-test_ha-550225-m02_ha-550225-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m04 "sudo cat /home/docker/cp-test_ha-550225-m02_ha-550225-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp testdata/cp-test.txt ha-550225-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216719830/001/cp-test_ha-550225-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225-m03:/home/docker/cp-test.txt ha-550225:/home/docker/cp-test_ha-550225-m03_ha-550225.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225 "sudo cat /home/docker/cp-test_ha-550225-m03_ha-550225.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225-m03:/home/docker/cp-test.txt ha-550225-m02:/home/docker/cp-test_ha-550225-m03_ha-550225-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m02 "sudo cat /home/docker/cp-test_ha-550225-m03_ha-550225-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225-m03:/home/docker/cp-test.txt ha-550225-m04:/home/docker/cp-test_ha-550225-m03_ha-550225-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m04 "sudo cat /home/docker/cp-test_ha-550225-m03_ha-550225-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp testdata/cp-test.txt ha-550225-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216719830/001/cp-test_ha-550225-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225:/home/docker/cp-test_ha-550225-m04_ha-550225.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225 "sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m02:/home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m02 "sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 cp ha-550225-m04:/home/docker/cp-test.txt ha-550225-m03:/home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 ssh -n ha-550225-m03 "sudo cat /home/docker/cp-test_ha-550225-m04_ha-550225-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.32s)
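
The copy test above pushes testdata/cp-test.txt to every node with "minikube cp" and reads it back with "minikube ssh -n <node>". A condensed Go sketch of that copy-and-verify round trip follows; the profile, node names, paths and commands are the ones in the log, while the loop is a simplified illustration of what helpers_test.go drives (it does not cover the node-to-node copies).

// cp_roundtrip.go: illustrative copy-and-readback loop over the cluster nodes.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "ha-550225"
	nodes := []string{"ha-550225", "ha-550225-m02", "ha-550225-m03", "ha-550225-m04"}
	for _, node := range nodes {
		dst := node + ":/home/docker/cp-test.txt"
		// out/minikube-linux-arm64 -p ha-550225 cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
		if err := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"cp", "testdata/cp-test.txt", dst).Run(); err != nil {
			fmt.Printf("cp to %s failed: %v\n", node, err)
			continue
		}
		// out/minikube-linux-arm64 -p ha-550225 ssh -n <node> "sudo cat /home/docker/cp-test.txt"
		out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", "-n", node, "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			fmt.Printf("readback on %s failed: %v\n", node, err)
			continue
		}
		fmt.Printf("%s: %s", node, out)
	}
}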

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.74s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 node stop m02 --alsologtostderr -v 5
E1002 07:07:22.242586  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-550225 node stop m02 --alsologtostderr -v 5: (11.948860911s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5: exit status 7 (787.672054ms)

                                                
                                                
-- stdout --
	ha-550225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-550225-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-550225-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-550225-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:07:32.618149  337978 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:07:32.618395  337978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:07:32.618437  337978 out.go:374] Setting ErrFile to fd 2...
	I1002 07:07:32.618457  337978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:07:32.618798  337978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:07:32.619030  337978 out.go:368] Setting JSON to false
	I1002 07:07:32.619148  337978 mustload.go:65] Loading cluster: ha-550225
	I1002 07:07:32.619239  337978 notify.go:220] Checking for updates...
	I1002 07:07:32.619605  337978 config.go:182] Loaded profile config "ha-550225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:07:32.619643  337978 status.go:174] checking status of ha-550225 ...
	I1002 07:07:32.620497  337978 cli_runner.go:164] Run: docker container inspect ha-550225 --format={{.State.Status}}
	I1002 07:07:32.641979  337978 status.go:371] ha-550225 host status = "Running" (err=<nil>)
	I1002 07:07:32.642001  337978 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:07:32.642305  337978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225
	I1002 07:07:32.669995  337978 host.go:66] Checking if "ha-550225" exists ...
	I1002 07:07:32.670287  337978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:07:32.670324  337978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225
	I1002 07:07:32.691630  337978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225/id_rsa Username:docker}
	I1002 07:07:32.789227  337978 ssh_runner.go:195] Run: systemctl --version
	I1002 07:07:32.795908  337978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:07:32.809108  337978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:07:32.906567  337978 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-02 07:07:32.896368188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:07:32.907266  337978 kubeconfig.go:125] found "ha-550225" server: "https://192.168.49.254:8443"
	I1002 07:07:32.907316  337978 api_server.go:166] Checking apiserver status ...
	I1002 07:07:32.907372  337978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:07:32.919576  337978 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1233/cgroup
	I1002 07:07:32.928669  337978 api_server.go:182] apiserver freezer: "6:freezer:/docker/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/crio/crio-80e6713dd03deb0819c17b7c20ac663bb001a016fb7706a5a97f8456e1ab2766"
	I1002 07:07:32.928755  337978 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1c1f8ec53310b472f6a526643d5bdbdcc50d29a82373d035d7a66a0a7ef7e69c/crio/crio-80e6713dd03deb0819c17b7c20ac663bb001a016fb7706a5a97f8456e1ab2766/freezer.state
	I1002 07:07:32.937903  337978 api_server.go:204] freezer state: "THAWED"
	I1002 07:07:32.937939  337978 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 07:07:32.946619  337978 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 07:07:32.946644  337978 status.go:463] ha-550225 apiserver status = Running (err=<nil>)
	I1002 07:07:32.946655  337978 status.go:176] ha-550225 status: &{Name:ha-550225 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:07:32.946684  337978 status.go:174] checking status of ha-550225-m02 ...
	I1002 07:07:32.947014  337978 cli_runner.go:164] Run: docker container inspect ha-550225-m02 --format={{.State.Status}}
	I1002 07:07:32.964707  337978 status.go:371] ha-550225-m02 host status = "Stopped" (err=<nil>)
	I1002 07:07:32.964775  337978 status.go:384] host is not running, skipping remaining checks
	I1002 07:07:32.964782  337978 status.go:176] ha-550225-m02 status: &{Name:ha-550225-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:07:32.964804  337978 status.go:174] checking status of ha-550225-m03 ...
	I1002 07:07:32.965131  337978 cli_runner.go:164] Run: docker container inspect ha-550225-m03 --format={{.State.Status}}
	I1002 07:07:32.982539  337978 status.go:371] ha-550225-m03 host status = "Running" (err=<nil>)
	I1002 07:07:32.982567  337978 host.go:66] Checking if "ha-550225-m03" exists ...
	I1002 07:07:32.982878  337978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m03
	I1002 07:07:33.014868  337978 host.go:66] Checking if "ha-550225-m03" exists ...
	I1002 07:07:33.015281  337978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:07:33.015414  337978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m03
	I1002 07:07:33.034667  337978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m03/id_rsa Username:docker}
	I1002 07:07:33.133659  337978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:07:33.148510  337978 kubeconfig.go:125] found "ha-550225" server: "https://192.168.49.254:8443"
	I1002 07:07:33.148537  337978 api_server.go:166] Checking apiserver status ...
	I1002 07:07:33.148578  337978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:07:33.160565  337978 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup
	I1002 07:07:33.169120  337978 api_server.go:182] apiserver freezer: "6:freezer:/docker/0d643c7e5dc7897f4c139636230035e12b046e19f60a2263f9e48913b339861a/crio/crio-f905ad58fe36ee412f328305346ed857b516764959ec2d4f4064fe1d620ee945"
	I1002 07:07:33.169194  337978 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0d643c7e5dc7897f4c139636230035e12b046e19f60a2263f9e48913b339861a/crio/crio-f905ad58fe36ee412f328305346ed857b516764959ec2d4f4064fe1d620ee945/freezer.state
	I1002 07:07:33.177381  337978 api_server.go:204] freezer state: "THAWED"
	I1002 07:07:33.177408  337978 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 07:07:33.185526  337978 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 07:07:33.185565  337978 status.go:463] ha-550225-m03 apiserver status = Running (err=<nil>)
	I1002 07:07:33.185575  337978 status.go:176] ha-550225-m03 status: &{Name:ha-550225-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:07:33.185596  337978 status.go:174] checking status of ha-550225-m04 ...
	I1002 07:07:33.185899  337978 cli_runner.go:164] Run: docker container inspect ha-550225-m04 --format={{.State.Status}}
	I1002 07:07:33.203859  337978 status.go:371] ha-550225-m04 host status = "Running" (err=<nil>)
	I1002 07:07:33.203884  337978 host.go:66] Checking if "ha-550225-m04" exists ...
	I1002 07:07:33.204189  337978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-550225-m04
	I1002 07:07:33.225448  337978 host.go:66] Checking if "ha-550225-m04" exists ...
	I1002 07:07:33.225745  337978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:07:33.225784  337978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-550225-m04
	I1002 07:07:33.243613  337978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/ha-550225-m04/id_rsa Username:docker}
	I1002 07:07:33.336559  337978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:07:33.350401  337978 status.go:176] ha-550225-m04 status: &{Name:ha-550225-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.74s)
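
The status trace above shows how the health of each control-plane apiserver is decided: an HTTPS GET against https://192.168.49.254:8443/healthz that must return 200/ok. The Go sketch below performs that probe; the endpoint comes from the log, and skipping certificate verification is an assumption made only to keep the sketch self-contained (the real check trusts the cluster's CA material rather than disabling TLS verification).

// healthz_probe.go: illustrative apiserver /healthz probe, mirroring the log lines
// "Checking apiserver healthz at https://192.168.49.254:8443/healthz ... returned 200".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch-only shortcut
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable (host likely Stopped):", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	if resp.StatusCode == http.StatusOK {
		fmt.Println("apiserver status = Running")
	}
}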

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (32.36s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 node start m02 --alsologtostderr -v 5
E1002 07:08:03.204336  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-550225 node start m02 --alsologtostderr -v 5: (30.82032241s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-550225 status --alsologtostderr -v 5: (1.406670622s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.36s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.311746392s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

                                                
                                    
TestJSONOutput/start/Command (85.83s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-117474 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-117474 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m25.822633343s)
--- PASS: TestJSONOutput/start/Command (85.83s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-117474 --output=json --user=testUser
E1002 07:26:41.265598  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-117474 --output=json --user=testUser: (5.713236507s)
--- PASS: TestJSONOutput/stop/Command (5.71s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-739369 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-739369 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (91.937494ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d3ceebfd-99f5-428d-9057-ec80eebda5db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-739369] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6222fb69-b8f8-49dd-b1bf-351b29b57a98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21643"}}
	{"specversion":"1.0","id":"d24522ad-0b8b-4d32-bb41-1178ff12a437","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8a4ba30f-02f2-4b1e-87f3-de3d258aee65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig"}}
	{"specversion":"1.0","id":"838d6ebf-9d3d-4567-885f-39cef473a213","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube"}}
	{"specversion":"1.0","id":"3efe073c-6b13-46e2-9b87-3735ca678ba7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"16ad66f0-7205-49ba-8704-873e4b1cca44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f47b4362-5500-4d17-9e89-20c2a90b2088","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-739369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-739369
--- PASS: TestErrorJSONOutput (0.24s)
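Note: the stdout captured above is newline-delimited, CloudEvents-style JSON. As a rough, hypothetical sketch (not part of the test suite), a few lines of Go can decode output of that shape; the field names are copied from the events above, while the struct layout and the program itself are assumptions for illustration only.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the JSON lines above; it is a
// convenient guess at a shape, not minikube's internal type.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Read events from stdin, e.g. piped from:
	//   minikube start -p demo --output=json
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}

Piping a JSON-output minikube run into the program above would print one line per step, info, or error event, which is the structure the test asserts on.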

                                                
                                    
TestKicCustomNetwork/create_custom_network (40.89s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-947757 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-947757 --network=: (38.700995864s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-947757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-947757
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-947757: (2.152484376s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.89s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (38.35s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-077715 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-077715 --network=bridge: (36.286641512s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-077715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-077715
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-077715: (2.038342506s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.35s)

                                                
                                    
TestKicExistingNetwork (37.56s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1002 07:28:08.670934  294357 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 07:28:08.686872  294357 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 07:28:08.686944  294357 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1002 07:28:08.686965  294357 cli_runner.go:164] Run: docker network inspect existing-network
W1002 07:28:08.708627  294357 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1002 07:28:08.708656  294357 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1002 07:28:08.708672  294357 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1002 07:28:08.708777  294357 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 07:28:08.725706  294357 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-87a294cab4b5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:36:50:ad:a1:2a:88} reservation:<nil>}
I1002 07:28:08.725997  294357 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ab9310}
I1002 07:28:08.726015  294357 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1002 07:28:08.726063  294357 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1002 07:28:08.789448  294357 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-468260 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-468260 --network=existing-network: (35.356808451s)
helpers_test.go:175: Cleaning up "existing-network-468260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-468260
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-468260: (2.055338585s)
I1002 07:28:46.217282  294357 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.56s)
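Note: the log above walks through the sequence this test exercises: pre-create a user-managed bridge network, then point minikube at it with --network. A minimal sketch of that sequence driven from Go, assuming docker and minikube are on PATH; the profile name, subnet, and gateway are placeholders, not values the suite requires.

package main

import (
	"log"
	"os/exec"
)

// run executes a command, logging combined output and failing loudly.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	log.Printf("%s %v:\n%s", name, args, out)
}

func main() {
	// Mirror of the steps above: create a bridge network up front,
	// then ask minikube to reuse it instead of creating its own.
	run("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"existing-network")
	run("minikube", "start", "-p", "existing-network-demo",
		"--network=existing-network")

	// Clean up afterwards, as the test helpers do.
	run("minikube", "delete", "-p", "existing-network-demo")
	run("docker", "network", "rm", "existing-network")
}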

                                                
                                    
TestKicCustomSubnet (38.33s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-687606 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-687606 --subnet=192.168.60.0/24: (36.238929512s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-687606 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-687606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-687606
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-687606: (2.069977756s)
--- PASS: TestKicCustomSubnet (38.33s)

                                                
                                    
TestKicStaticIP (39.01s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-883803 --static-ip=192.168.200.200
E1002 07:29:28.908130  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-883803 --static-ip=192.168.200.200: (36.681340191s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-883803 ip
helpers_test.go:175: Cleaning up "static-ip-883803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-883803
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-883803: (2.123603792s)
--- PASS: TestKicStaticIP (39.01s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (76.55s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-901294 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-901294 --driver=docker  --container-runtime=crio: (36.069616861s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-903879 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-903879 --driver=docker  --container-runtime=crio: (35.025515352s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-901294
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-903879
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-903879" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-903879
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-903879: (1.970504696s)
helpers_test.go:175: Cleaning up "first-901294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-901294
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-901294: (1.967564811s)
--- PASS: TestMinikubeProfile (76.55s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.65s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-377105 --memory=3072 --mount-string /tmp/TestMountStartserial306161060/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-377105 --memory=3072 --mount-string /tmp/TestMountStartserial306161060/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.645546658s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.65s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-377105 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-378779 --memory=3072 --mount-string /tmp/TestMountStartserial306161060/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-378779 --memory=3072 --mount-string /tmp/TestMountStartserial306161060/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.863499671s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.86s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-378779 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-377105 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-377105 --alsologtostderr -v=5: (1.625969975s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-378779 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-378779
E1002 07:31:41.264400  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-378779: (1.207338737s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.83s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-378779
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-378779: (6.828494928s)
--- PASS: TestMountStart/serial/RestartStopped (7.83s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-378779 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (143.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-339784 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-339784 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m23.07783187s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (143.59s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-339784 -- rollout status deployment/busybox: (3.272276436s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- exec busybox-7b57f96db7-d62tn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- exec busybox-7b57f96db7-wv2rr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- exec busybox-7b57f96db7-d62tn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- exec busybox-7b57f96db7-wv2rr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- exec busybox-7b57f96db7-d62tn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- exec busybox-7b57f96db7-wv2rr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.13s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- exec busybox-7b57f96db7-d62tn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- exec busybox-7b57f96db7-d62tn -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- exec busybox-7b57f96db7-wv2rr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-339784 -- exec busybox-7b57f96db7-wv2rr -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)
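Note: inside each busybox pod the test resolves host.minikube.internal by taking field 3 of line 5 of busybox's nslookup output, then pings that address once. A hedged sketch of the same check driven through kubectl from Go; the pod name is a placeholder and kubectl is assumed to point at the cluster.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-example" // placeholder pod name
	// The same shell pipeline the test runs inside the pod: on busybox,
	// line 5 of nslookup output carries the resolved address.
	pipeline := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"

	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", pipeline).Output()
	if err != nil {
		log.Fatal(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", hostIP)

	// Then confirm the pod can actually reach the host, as the test does.
	if err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c",
		"ping -c 1 "+hostIP).Run(); err != nil {
		log.Fatal("ping failed: ", err)
	}
}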

                                                
                                    
TestMultiNode/serial/AddNode (59.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-339784 -v=5 --alsologtostderr
E1002 07:34:28.907860  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-339784 -v=5 --alsologtostderr: (58.926828025s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.64s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-339784 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 cp testdata/cp-test.txt multinode-339784:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 cp multinode-339784:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1497634110/001/cp-test_multinode-339784.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 cp multinode-339784:/home/docker/cp-test.txt multinode-339784-m02:/home/docker/cp-test_multinode-339784_multinode-339784-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784-m02 "sudo cat /home/docker/cp-test_multinode-339784_multinode-339784-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 cp multinode-339784:/home/docker/cp-test.txt multinode-339784-m03:/home/docker/cp-test_multinode-339784_multinode-339784-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784-m03 "sudo cat /home/docker/cp-test_multinode-339784_multinode-339784-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 cp testdata/cp-test.txt multinode-339784-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 cp multinode-339784-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1497634110/001/cp-test_multinode-339784-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 cp multinode-339784-m02:/home/docker/cp-test.txt multinode-339784:/home/docker/cp-test_multinode-339784-m02_multinode-339784.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784 "sudo cat /home/docker/cp-test_multinode-339784-m02_multinode-339784.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 cp multinode-339784-m02:/home/docker/cp-test.txt multinode-339784-m03:/home/docker/cp-test_multinode-339784-m02_multinode-339784-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784-m03 "sudo cat /home/docker/cp-test_multinode-339784-m02_multinode-339784-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 cp testdata/cp-test.txt multinode-339784-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 cp multinode-339784-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1497634110/001/cp-test_multinode-339784-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 cp multinode-339784-m03:/home/docker/cp-test.txt multinode-339784:/home/docker/cp-test_multinode-339784-m03_multinode-339784.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784 "sudo cat /home/docker/cp-test_multinode-339784-m03_multinode-339784.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 cp multinode-339784-m03:/home/docker/cp-test.txt multinode-339784-m02:/home/docker/cp-test_multinode-339784-m03_multinode-339784-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 ssh -n multinode-339784-m02 "sudo cat /home/docker/cp-test_multinode-339784-m03_multinode-339784-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.48s)
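Note: the long run of cp/ssh commands above follows one pattern per node: copy testdata onto the node, read it back over ssh, then fan the file out to every other node and verify it arrived. A compact sketch of that loop with a hypothetical profile name and simplified destination filenames.

package main

import (
	"log"
	"os/exec"
)

// mk wraps "minikube -p <profile> <args...>" and returns combined output.
func mk(profile string, args ...string) string {
	out, err := exec.Command("minikube", append([]string{"-p", profile}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	profile := "multinode-demo" // placeholder profile name
	nodes := []string{"multinode-demo", "multinode-demo-m02", "multinode-demo-m03"}

	for _, src := range nodes {
		// Push a local file to the source node, then read it back over ssh.
		mk(profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		mk(profile, "ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")

		// Fan the file out to every other node and verify it there.
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			remote := dst + ":/home/docker/cp-test_" + src + ".txt"
			mk(profile, "cp", src+":/home/docker/cp-test.txt", remote)
			mk(profile, "ssh", "-n", dst, "sudo cat /home/docker/cp-test_"+src+".txt")
		}
	}
}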

                                                
                                    
TestMultiNode/serial/StopNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-339784 node stop m03: (1.221534869s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-339784 status: exit status 7 (535.933684ms)

                                                
                                                
-- stdout --
	multinode-339784
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-339784-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-339784-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-339784 status --alsologtostderr: exit status 7 (539.559463ms)

                                                
                                                
-- stdout --
	multinode-339784
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-339784-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-339784-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:35:33.633042  406966 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:35:33.633381  406966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:35:33.633391  406966 out.go:374] Setting ErrFile to fd 2...
	I1002 07:35:33.633396  406966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:35:33.633686  406966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:35:33.633875  406966 out.go:368] Setting JSON to false
	I1002 07:35:33.633898  406966 mustload.go:65] Loading cluster: multinode-339784
	I1002 07:35:33.634308  406966 config.go:182] Loaded profile config "multinode-339784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:35:33.634320  406966 status.go:174] checking status of multinode-339784 ...
	I1002 07:35:33.634847  406966 cli_runner.go:164] Run: docker container inspect multinode-339784 --format={{.State.Status}}
	I1002 07:35:33.635475  406966 notify.go:220] Checking for updates...
	I1002 07:35:33.653389  406966 status.go:371] multinode-339784 host status = "Running" (err=<nil>)
	I1002 07:35:33.653411  406966 host.go:66] Checking if "multinode-339784" exists ...
	I1002 07:35:33.653731  406966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-339784
	I1002 07:35:33.682843  406966 host.go:66] Checking if "multinode-339784" exists ...
	I1002 07:35:33.683229  406966 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:35:33.683306  406966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-339784
	I1002 07:35:33.711045  406966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33253 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/multinode-339784/id_rsa Username:docker}
	I1002 07:35:33.806914  406966 ssh_runner.go:195] Run: systemctl --version
	I1002 07:35:33.813959  406966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:35:33.827375  406966 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:35:33.883543  406966 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 07:35:33.873419934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:35:33.884094  406966 kubeconfig.go:125] found "multinode-339784" server: "https://192.168.67.2:8443"
	I1002 07:35:33.884133  406966 api_server.go:166] Checking apiserver status ...
	I1002 07:35:33.884178  406966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:35:33.895722  406966 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1269/cgroup
	I1002 07:35:33.904240  406966 api_server.go:182] apiserver freezer: "6:freezer:/docker/1abe60654b6dcc50a3b0d45036a8d375352a26eb59bbdfadc83427780c1dfcf0/crio/crio-da5e3b655efdbc6ab00827b8d58ba968b9027b9d18cd616587bda4d755b8a3e3"
	I1002 07:35:33.904308  406966 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1abe60654b6dcc50a3b0d45036a8d375352a26eb59bbdfadc83427780c1dfcf0/crio/crio-da5e3b655efdbc6ab00827b8d58ba968b9027b9d18cd616587bda4d755b8a3e3/freezer.state
	I1002 07:35:33.912325  406966 api_server.go:204] freezer state: "THAWED"
	I1002 07:35:33.912361  406966 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 07:35:33.920656  406966 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1002 07:35:33.920685  406966 status.go:463] multinode-339784 apiserver status = Running (err=<nil>)
	I1002 07:35:33.920696  406966 status.go:176] multinode-339784 status: &{Name:multinode-339784 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:35:33.920713  406966 status.go:174] checking status of multinode-339784-m02 ...
	I1002 07:35:33.921014  406966 cli_runner.go:164] Run: docker container inspect multinode-339784-m02 --format={{.State.Status}}
	I1002 07:35:33.940228  406966 status.go:371] multinode-339784-m02 host status = "Running" (err=<nil>)
	I1002 07:35:33.940255  406966 host.go:66] Checking if "multinode-339784-m02" exists ...
	I1002 07:35:33.940607  406966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-339784-m02
	I1002 07:35:33.957863  406966 host.go:66] Checking if "multinode-339784-m02" exists ...
	I1002 07:35:33.958219  406966 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:35:33.958272  406966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-339784-m02
	I1002 07:35:33.976213  406966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/21643-292504/.minikube/machines/multinode-339784-m02/id_rsa Username:docker}
	I1002 07:35:34.076853  406966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:35:34.090649  406966 status.go:176] multinode-339784-m02 status: &{Name:multinode-339784-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:35:34.090686  406966 status.go:174] checking status of multinode-339784-m03 ...
	I1002 07:35:34.091049  406966 cli_runner.go:164] Run: docker container inspect multinode-339784-m03 --format={{.State.Status}}
	I1002 07:35:34.108173  406966 status.go:371] multinode-339784-m03 host status = "Stopped" (err=<nil>)
	I1002 07:35:34.108208  406966 status.go:384] host is not running, skipping remaining checks
	I1002 07:35:34.108215  406966 status.go:176] multinode-339784-m03 status: &{Name:multinode-339784-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
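Note: the stderr above shows how the status command decides the apiserver is healthy: it locates the kube-apiserver process, confirms its freezer cgroup reports THAWED, and finally expects HTTP 200 "ok" from /healthz. A standalone sketch of just that last probe; the address is the cluster IP seen in the log, and certificate verification is skipped here only to keep the example short.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Probe the apiserver healthz endpoint, as the final step above does.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}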

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-339784 node start m03 -v=5 --alsologtostderr: (7.34369653s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.16s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (74.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-339784
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-339784
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-339784: (24.768404969s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-339784 --wait=true -v=5 --alsologtostderr
E1002 07:36:41.263897  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-339784 --wait=true -v=5 --alsologtostderr: (50.020361623s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-339784
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.93s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-339784 node delete m03: (4.935813341s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.62s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-339784 stop: (23.547184555s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-339784 status: exit status 7 (86.789604ms)

                                                
                                                
-- stdout --
	multinode-339784
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-339784-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-339784 status --alsologtostderr: exit status 7 (106.378881ms)

                                                
                                                
-- stdout --
	multinode-339784
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-339784-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:37:26.511892  414699 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:37:26.512086  414699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:37:26.512112  414699 out.go:374] Setting ErrFile to fd 2...
	I1002 07:37:26.512134  414699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:37:26.512435  414699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:37:26.512678  414699 out.go:368] Setting JSON to false
	I1002 07:37:26.512755  414699 mustload.go:65] Loading cluster: multinode-339784
	I1002 07:37:26.512814  414699 notify.go:220] Checking for updates...
	I1002 07:37:26.513804  414699 config.go:182] Loaded profile config "multinode-339784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:37:26.513829  414699 status.go:174] checking status of multinode-339784 ...
	I1002 07:37:26.514448  414699 cli_runner.go:164] Run: docker container inspect multinode-339784 --format={{.State.Status}}
	I1002 07:37:26.533390  414699 status.go:371] multinode-339784 host status = "Stopped" (err=<nil>)
	I1002 07:37:26.533411  414699 status.go:384] host is not running, skipping remaining checks
	I1002 07:37:26.533418  414699 status.go:176] multinode-339784 status: &{Name:multinode-339784 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:37:26.533448  414699 status.go:174] checking status of multinode-339784-m02 ...
	I1002 07:37:26.533986  414699 cli_runner.go:164] Run: docker container inspect multinode-339784-m02 --format={{.State.Status}}
	I1002 07:37:26.566799  414699 status.go:371] multinode-339784-m02 host status = "Stopped" (err=<nil>)
	I1002 07:37:26.566826  414699 status.go:384] host is not running, skipping remaining checks
	I1002 07:37:26.566852  414699 status.go:176] multinode-339784-m02 status: &{Name:multinode-339784-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.74s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-339784 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-339784 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (46.834216766s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-339784 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.54s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-339784
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-339784-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-339784-m02 --driver=docker  --container-runtime=crio: exit status 14 (91.879563ms)

                                                
                                                
-- stdout --
	* [multinode-339784-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-339784-m02' is duplicated with machine name 'multinode-339784-m02' in profile 'multinode-339784'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-339784-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-339784-m03 --driver=docker  --container-runtime=crio: (33.096940483s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-339784
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-339784: exit status 80 (356.300804ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-339784 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-339784-m03 already exists in multinode-339784-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-339784-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-339784-m03: (1.992303076s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.59s)

                                                
                                    
TestPreload (130.46s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-897040 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1002 07:39:11.977431  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:39:28.913474  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:39:44.332880  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-897040 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.894479331s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-897040 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-897040 image pull gcr.io/k8s-minikube/busybox: (2.235730351s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-897040
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-897040: (5.794456321s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-897040 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-897040 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (56.898155022s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-897040 image list
helpers_test.go:175: Cleaning up "test-preload-897040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-897040
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-897040: (2.393250694s)
--- PASS: TestPreload (130.46s)
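
The sequence above can be reproduced by hand with the same binary; a minimal sketch, where the profile name is illustrative and the final image-list check is an inference about what the test asserts:

	out/minikube-linux-arm64 start -p preload-demo --memory=3072 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
	out/minikube-linux-arm64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-arm64 stop -p preload-demo
	out/minikube-linux-arm64 start -p preload-demo --memory=3072 --driver=docker --container-runtime=crio
	# presumably the busybox image pulled before the stop should still show up here
	out/minikube-linux-arm64 -p preload-demo image list
	out/minikube-linux-arm64 delete -p preload-demo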

                                                
                                    
x
+
TestInsufficientStorage (11.09s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-030815 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1002 07:41:41.264360  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-030815 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.597624725s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"86098184-ad4d-4219-b145-a19e7d77fa8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-030815] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"33fd2b71-58d0-44c2-a0b1-cfd9c73bb4ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21643"}}
	{"specversion":"1.0","id":"9bfaa075-0573-46eb-905f-99dda06ab7a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b779db63-1e3b-4d3e-926e-5724a43db03f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig"}}
	{"specversion":"1.0","id":"9875586f-14bc-4c6e-81ce-2524249b2170","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube"}}
	{"specversion":"1.0","id":"b5051c07-29ec-423b-8d10-4d502980ccbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8791de7f-89f8-4204-b4f8-7fbcdbd4e711","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4b997f64-6279-4cac-8135-8095645b70d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1b25f4d5-75ee-4495-ac14-50cb77229871","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8cfbbcd9-f9be-4d58-9887-4768b68c9df9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac6e5f96-9359-4d42-997c-d7ce5e86734a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0092beb2-3c62-4e82-9c5e-af89e431437a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-030815\" primary control-plane node in \"insufficient-storage-030815\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e5974a90-ae4c-4dc3-bf1b-fe61f40fc0dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5af3f9a7-d9af-4155-a65e-50e4c377cca8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"84732584-c9c7-4fd9-9b56-9fc7b5fd873c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-030815 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-030815 --output=json --layout=cluster: exit status 7 (297.585085ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-030815","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-030815","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 07:41:46.949900  430711 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-030815" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-030815 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-030815 --output=json --layout=cluster: exit status 7 (294.415191ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-030815","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-030815","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 07:41:47.245679  430779 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-030815" does not appear in /home/jenkins/minikube-integration/21643-292504/kubeconfig
	E1002 07:41:47.255557  430779 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/insufficient-storage-030815/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-030815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-030815
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-030815: (1.897334946s)
--- PASS: TestInsufficientStorage (11.09s)
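
The JSON events above show the test simulating a nearly full /var via the two MINIKUBE_TEST_* variables. A rough manual reproduction, assuming those variables behave as the log suggests (units are not stated there) and using an illustrative profile name:

	export MINIKUBE_TEST_STORAGE_CAPACITY=100
	export MINIKUBE_TEST_AVAILABLE_STORAGE=19
	out/minikube-linux-arm64 start -p storage-demo --memory=3072 --output=json --driver=docker --container-runtime=crio
	# expected: exit status 26 with an io.k8s.sigs.minikube.error event (RSRC_DOCKER_STORAGE);
	# per the message in the log, passing '--force' skips this check
	echo $?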

                                                
                                    
x
+
TestRunningBinaryUpgrade (54.92s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3963611896 start -p running-upgrade-838161 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3963611896 start -p running-upgrade-838161 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.229603082s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-838161 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-838161 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.949207354s)
helpers_test.go:175: Cleaning up "running-upgrade-838161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-838161
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-838161: (2.019894591s)
--- PASS: TestRunningBinaryUpgrade (54.92s)

                                                
                                    
x
+
TestKubernetesUpgrade (355.81s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-011391 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-011391 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.181483911s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-011391
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-011391: (1.318264462s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-011391 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-011391 status --format={{.Host}}: exit status 7 (105.008335ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-011391 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-011391 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m35.229828833s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-011391 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-011391 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-011391 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (103.972536ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-011391] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-011391
	    minikube start -p kubernetes-upgrade-011391 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0113912 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-011391 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-011391 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-011391 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.802100877s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-011391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-011391
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-011391: (1.968856476s)
--- PASS: TestKubernetesUpgrade (355.81s)
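
The downgrade refusal above (K8S_DOWNGRADE_UNSUPPORTED) ships with its own remediation; a minimal sketch of the first suggested option, which is destructive because it recreates the cluster, using the profile name from this run:

	# option 1 from the suggestion above: recreate the cluster at the older version
	minikube delete -p kubernetes-upgrade-011391
	minikube start -p kubernetes-upgrade-011391 --kubernetes-version=v1.28.0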

                                                
                                    
x
+
TestMissingContainerUpgrade (112.34s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2478632783 start -p missing-upgrade-857609 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2478632783 start -p missing-upgrade-857609 --memory=3072 --driver=docker  --container-runtime=crio: (1m2.744024007s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-857609
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-857609
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-857609 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-857609 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.541574237s)
helpers_test.go:175: Cleaning up "missing-upgrade-857609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-857609
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-857609: (4.477901138s)
--- PASS: TestMissingContainerUpgrade (112.34s)
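
The run above removes the node container behind minikube's back and then lets the newer binary recover it. A rough manual equivalent; the profile name is illustrative and the old-binary path stands in for whatever v1.32.0 binary is on hand:

	/path/to/minikube-v1.32.0 start -p missing-demo --memory=3072 --driver=docker --container-runtime=crio
	# remove the node container directly, leaving the profile behind
	docker stop missing-demo && docker rm missing-demo
	# the newer binary is expected to recreate the missing container on start
	out/minikube-linux-arm64 start -p missing-demo --memory=3072 --driver=docker --container-runtime=crio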

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-050176 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-050176 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (105.122906ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-050176] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
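
The MK_USAGE error above documents the flag conflict and its fix; a minimal sketch following the error message's own suggestion:

	# --no-kubernetes and --kubernetes-version are mutually exclusive (exit status 14 above);
	# if kubernetes-version is set as a global config value, unset it first
	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-050176 --no-kubernetes --driver=docker --container-runtime=crio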

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (44.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-050176 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-050176 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.558144787s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-050176 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (35.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-050176 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-050176 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.62639322s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-050176 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-050176 status -o json: exit status 2 (307.904513ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-050176","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-050176
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-050176: (1.869978833s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (35.80s)
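
The `status -o json` call above exits 2 because the kubelet and apiserver are stopped while the host container keeps running. A small sketch of reading those fields; jq is an assumption (not used by the test itself), and the field names are taken from the JSON shown above:

	out/minikube-linux-arm64 -p NoKubernetes-050176 status -o json | jq -r '.Host, .Kubelet, .APIServer'
	# expected, per the log above: Running / Stopped / Stopped, with a non-zero exit from minikube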

                                                
                                    
x
+
TestNoKubernetes/serial/Start (10.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-050176 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-050176 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.624810953s)
--- PASS: TestNoKubernetes/serial/Start (10.62s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-050176 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-050176 "sudo systemctl is-active --quiet service kubelet": exit status 1 (339.227121ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-050176
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-050176: (1.307284995s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-050176 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-050176 --driver=docker  --container-runtime=crio: (7.700796404s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.70s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-050176 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-050176 "sudo systemctl is-active --quiet service kubelet": exit status 1 (397.222964ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.71s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (61.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1342199068 start -p stopped-upgrade-151473 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1342199068 start -p stopped-upgrade-151473 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.434195251s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1342199068 -p stopped-upgrade-151473 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1342199068 -p stopped-upgrade-151473 stop: (1.291794245s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-151473 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1002 07:44:28.908265  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-151473 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.319465171s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (61.05s)
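
The flow above starts a cluster with an old release binary, stops it, then restarts it with the freshly built binary. A rough manual equivalent; the download URL follows the usual minikube release layout and is an assumption, since the test itself uses a pre-fetched binary under /tmp:

	curl -Lo /tmp/minikube-v1.32.0 https://storage.googleapis.com/minikube/releases/v1.32.0/minikube-linux-arm64
	chmod +x /tmp/minikube-v1.32.0
	/tmp/minikube-v1.32.0 start -p stopped-upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.32.0 -p stopped-upgrade-demo stop
	# the new binary should adopt and upgrade the stopped cluster
	out/minikube-linux-arm64 start -p stopped-upgrade-demo --memory=3072 --driver=docker --container-runtime=crio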

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-151473
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-151473: (1.257874903s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

                                                
                                    
x
+
TestPause/serial/Start (82.93s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-422707 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1002 07:46:41.264266  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-422707 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m22.933025849s)
--- PASS: TestPause/serial/Start (82.93s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (30.45s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-422707 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-422707 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.415789337s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-810803 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-810803 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (192.232519ms)

                                                
                                                
-- stdout --
	* [false-810803] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 07:49:32.034298  469419 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:49:32.034472  469419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:49:32.034504  469419 out.go:374] Setting ErrFile to fd 2...
	I1002 07:49:32.034528  469419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:49:32.034836  469419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-292504/.minikube/bin
	I1002 07:49:32.035359  469419 out.go:368] Setting JSON to false
	I1002 07:49:32.036270  469419 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9123,"bootTime":1759382249,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1002 07:49:32.036376  469419 start.go:140] virtualization:  
	I1002 07:49:32.039954  469419 out.go:179] * [false-810803] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:49:32.043687  469419 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:49:32.043778  469419 notify.go:220] Checking for updates...
	I1002 07:49:32.049810  469419 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:49:32.052795  469419 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-292504/kubeconfig
	I1002 07:49:32.055769  469419 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-292504/.minikube
	I1002 07:49:32.058617  469419 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:49:32.061492  469419 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:49:32.064814  469419 config.go:182] Loaded profile config "force-systemd-flag-275910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 07:49:32.064940  469419 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:49:32.093314  469419 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:49:32.093443  469419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:49:32.151955  469419 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:49:32.142981951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:49:32.152067  469419 docker.go:318] overlay module found
	I1002 07:49:32.155198  469419 out.go:179] * Using the docker driver based on user configuration
	I1002 07:49:32.158182  469419 start.go:304] selected driver: docker
	I1002 07:49:32.158205  469419 start.go:924] validating driver "docker" against <nil>
	I1002 07:49:32.158219  469419 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:49:32.161782  469419 out.go:203] 
	W1002 07:49:32.164811  469419 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1002 07:49:32.167755  469419 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-810803 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-810803

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-810803

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-810803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-810803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-810803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-810803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-810803

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-810803

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-810803

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-810803

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-810803

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-810803" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-810803" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-810803

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-810803"

                                                
                                                
----------------------- debugLogs end: false-810803 [took: 3.290119263s] --------------------------------
helpers_test.go:175: Cleaning up "false-810803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-810803
--- PASS: TestNetworkPlugins/group/false (3.64s)
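
The run above confirms that `--cni=false` is rejected when the container runtime is crio ("The \"crio\" container runtime requires CNI", exit status 14). A minimal sketch of the distinction, with an illustrative profile name:

	# rejected up front: crio needs a CNI plugin
	minikube start -p cni-demo --cni=false --driver=docker --container-runtime=crio
	# accepted: leave CNI selection to minikube (or pick a specific plugin explicitly)
	minikube start -p cni-demo --driver=docker --container-runtime=crio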

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (58.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1002 07:59:28.908392  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (58.980568959s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (58.98s)
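Note on the E-level cert_rotation messages that recur throughout the rest of this run: they appear to come from a client-go certificate watcher that still references profiles deleted earlier (here addons-067378), so the client.crt it tries to reload no longer exists; the tests around them still pass. A quick manual check, reusing only paths and commands already shown in this report (a sketch):

    out/minikube-linux-arm64 profile list
    ls -l /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt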

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-356986 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [511ea254-6098-48d3-9677-8672c1681171] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [511ea254-6098-48d3-9677-8672c1681171] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.008117707s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-356986 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.11s)
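The DeployApp step above reduces to three kubectl calls against the profile's context; a rough manual equivalent is sketched below, with "kubectl wait" standing in for the test's 8m0s polling loop (480s = 8m0s):

    kubectl --context old-k8s-version-356986 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-356986 wait --for=condition=Ready pod -l integration-test=busybox --timeout=480s
    kubectl --context old-k8s-version-356986 exec busybox -- /bin/sh -c "ulimit -n"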

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-356986 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-356986 --alsologtostderr -v=3: (11.908495161s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.91s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-356986 -n old-k8s-version-356986
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-356986 -n old-k8s-version-356986: exit status 7 (71.948548ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-356986 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
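Here "status --format={{.Host}}" exits with status 7 while printing Stopped, which the test explicitly tolerates ("may be ok"): the dashboard addon is enabled against the stopped profile and only becomes active on the SecondStart below. A minimal shell gate on the same check, assuming the same profile name (a sketch):

    host_state="$(out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-356986 || true)"
    if [ "$host_state" != "Running" ]; then
        echo "profile is $host_state; addons enabled now will apply on the next start"
    fi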

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (48.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-356986 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.43885076s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-356986 -n old-k8s-version-356986
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-45gx5" [b3d3d617-491d-4ea5-b0cd-fbc9bfb09ba1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004427673s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-45gx5" [b3d3d617-491d-4ea5-b0cd-fbc9bfb09ba1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003409349s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-356986 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-356986 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
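VerifyKubernetesImages runs "image list --format=json" and appears to flag only images that are not part of minikube's expected set (the kindnetd and busybox entries above). An independent cross-check of what the cluster is actually running, using plain kubectl instead of the test helper (a sketch):

    kubectl --context old-k8s-version-356986 get pods -A -o jsonpath='{range .items[*]}{range .spec.containers[*]}{.image}{"\n"}{end}{end}' | sort -u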

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (79.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1002 08:01:41.264511  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m19.088675178s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (89.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m29.655527846s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.66s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-604182 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [79649b38-6d08-4670-b939-ea8b9b38a4ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [79649b38-6d08-4670-b939-ea8b9b38a4ad] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003374153s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-604182 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-604182 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-604182 --alsologtostderr -v=3: (11.922229683s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.92s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-604182 -n no-preload-604182
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-604182 -n no-preload-604182: exit status 7 (74.010011ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-604182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
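As with the old-k8s-version profile, the dashboard addon is enabled here while the cluster is stopped and takes effect when the profile is started again below. One way to confirm the addon was recorded against the profile (a sketch; uses the addons list subcommand, which the test itself does not run):

    out/minikube-linux-arm64 addons list -p no-preload-604182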

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (52.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-604182 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.957817918s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-604182 -n no-preload-604182
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-171347 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [16034ac6-463d-44ce-8c88-afe3eeeec748] Pending
helpers_test.go:352: "busybox" [16034ac6-463d-44ce-8c88-afe3eeeec748] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [16034ac6-463d-44ce-8c88-afe3eeeec748] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003943141s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-171347 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-171347 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-171347 --alsologtostderr -v=3: (12.300073478s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-171347 -n embed-certs-171347
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-171347 -n embed-certs-171347: exit status 7 (77.849998ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-171347 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (59.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-171347 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (58.443687058s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-171347 -n embed-certs-171347
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (59.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dmlvr" [7a97e796-8ad8-47b3-8086-2f9a8da34762] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006161409s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dmlvr" [7a97e796-8ad8-47b3-8086-2f9a8da34762] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004726098s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-604182 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-604182 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1002 08:04:28.907577  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m21.323673906s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.32s)
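The --apiserver-port=8444 flag is what makes this the "diff-port" variant: the API server inside the node listens on 8444 instead of minikube's default 8443. A quick check mirroring the "pgrep -a kubelet" pattern used by the KubeletFlags tests later in this report (a sketch; the apiserver command line should show the non-default secure port):

    out/minikube-linux-arm64 ssh -p default-k8s-diff-port-417078 "pgrep -a kube-apiserver"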

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lph8n" [732dd77c-3a1a-4f39-be41-fee9623149cf] Running
E1002 08:04:51.157048  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:04:51.163422  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:04:51.174772  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:04:51.196377  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:04:51.237742  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:04:51.319355  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:04:51.480803  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:04:51.802413  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004222921s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lph8n" [732dd77c-3a1a-4f39-be41-fee9623149cf] Running
E1002 08:04:52.444620  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:04:53.726779  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:04:56.288226  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003526913s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-171347 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-171347 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (42.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-009374 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1002 08:05:11.651453  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:05:32.132769  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-009374 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.452745335s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-417078 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c863efda-3502-432b-8d0a-03bbb8b70f5e] Pending
helpers_test.go:352: "busybox" [c863efda-3502-432b-8d0a-03bbb8b70f5e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c863efda-3502-432b-8d0a-03bbb8b70f5e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004379947s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-417078 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-009374 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-009374 --alsologtostderr -v=3: (1.304544897s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-009374 -n newest-cni-009374
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-009374 -n newest-cni-009374: exit status 7 (110.691288ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-009374 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-009374 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-009374 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (14.826756877s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-009374 -n newest-cni-009374
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-417078 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-417078 --alsologtostderr -v=3: (11.975807851s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417078 -n default-k8s-diff-port-417078
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417078 -n default-k8s-diff-port-417078: exit status 7 (159.876194ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-417078 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-417078 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.48848091s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-417078 -n default-k8s-diff-port-417078
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.90s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
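The two warnings above are expected for this profile: it was started with --network-plugin=cni and a custom pod-network-cidr, and, as the warning notes, cni mode needs additional setup before pods can schedule, so the user-app and addon checks are skipped. To look at the resulting cluster state directly (a sketch, plain kubectl only):

    kubectl --context newest-cni-009374 get nodes
    kubectl --context newest-cni-009374 get pods -A -o wide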

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-009374 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (88.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1002 08:06:41.264148  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m28.16080486s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zm2mb" [60d8096f-e9e3-4d0e-8f16-67ab47b4563e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003560624s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zm2mb" [60d8096f-e9e3-4d0e-8f16-67ab47b4563e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003484061s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-417078 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)
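UserAppExistsAfterStop and AddonExistsAfterStop both key off the same dashboard pod label; the equivalent one-off checks, with the label and namespace taken from the log (a sketch):

    kubectl --context default-k8s-diff-port-417078 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context default-k8s-diff-port-417078 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper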

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-417078 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (63.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1002 08:07:35.016551  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:07:45.608979  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:07:45.615375  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:07:45.626777  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:07:45.648173  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:07:45.689577  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:07:45.770985  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:07:45.932747  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:07:46.254813  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:07:46.896350  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:07:48.178466  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.005736858s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.01s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-810803 "pgrep -a kubelet"
I1002 08:07:49.378925  294357 config.go:182] Loaded profile config "auto-810803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-810803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vvmc6" [f65166ac-a1b9-42c9-836e-fa6b24485881] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 08:07:50.740010  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-vvmc6" [f65166ac-a1b9-42c9-836e-fa6b24485881] Running
E1002 08:07:55.862136  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003031822s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.39s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-810803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
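Taken together, the DNS, Localhost and HairPin checks cover the basics of the auto-selected network plugin: in-cluster name resolution, a pod reaching itself via localhost, and a pod reaching itself back through its own service (hairpin). The same three probes can be run by hand while the profile is still up; the commands are the ones from the log (a sketch):

    kubectl --context auto-810803 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"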

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (85.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m25.575857672s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.58s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-n6j6g" [d5801598-dda7-4f9a-8892-fec144d0bafd] Running
E1002 08:08:26.585044  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/no-preload-604182/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004864207s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
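Unlike kindnet, which runs in kube-system (see the kindnet/ControllerPod test below), the flannel pods land in their own kube-flannel namespace; the readiness the test waits for can be checked by hand with (a sketch):

    kubectl --context flannel-810803 -n kube-flannel get pods -l app=flannel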

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-810803 "pgrep -a kubelet"
I1002 08:08:32.136764  294357 config.go:182] Loaded profile config "flannel-810803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-810803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tsxl2" [5d7d3cfc-5f41-4e3e-aa61-cafa1bd07fbf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tsxl2" [5d7d3cfc-5f41-4e3e-aa61-cafa1bd07fbf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005049522s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-810803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (51.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1002 08:09:28.908269  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (51.264388864s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (51.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-nkq5q" [c02e60a3-cf49-43c4-b7be-4e867af131a8] Running
E1002 08:09:51.156898  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/old-k8s-version-356986/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003624555s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-810803 "pgrep -a kubelet"
I1002 08:09:55.293962  294357 config.go:182] Loaded profile config "kindnet-810803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-810803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l89wn" [e634a5da-7762-43c6-a10b-ec3aa48cabc3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-l89wn" [e634a5da-7762-43c6-a10b-ec3aa48cabc3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004157434s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-810803 "pgrep -a kubelet"
I1002 08:10:01.090831  294357 config.go:182] Loaded profile config "enable-default-cni-810803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-810803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pksts" [d1ee4973-e7a9-45b8-90de-fbb6f1c3f9c1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pksts" [d1ee4973-e7a9-45b8-90de-fbb6f1c3f9c1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004992056s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.31s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-810803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-810803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)
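
Each CNI profile is exercised with the same three connectivity probes shown above: cluster DNS resolution, a loopback connection, and a hairpin connection (the pod reaching itself through its own "netcat" Service). They can be replayed by hand against any of these profiles; the commands below are copied from the log, using enable-default-cni-810803 as the example context:

  # DNS: resolve the kubernetes.default Service from inside the pod
  kubectl --context enable-default-cni-810803 exec deployment/netcat -- nslookup kubernetes.default
  # Localhost: connect to the pod's own listener on port 8080
  kubectl --context enable-default-cni-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # HairPin: connect back to the pod through its Service name
  kubectl --context enable-default-cni-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"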

TestNetworkPlugins/group/bridge/Start (84.14s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m24.138532193s)
--- PASS: TestNetworkPlugins/group/bridge/Start (84.14s)

TestNetworkPlugins/group/calico/Start (63.72s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1002 08:10:44.400499  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:44.406900  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:44.419778  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:44.441582  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:44.483115  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:44.564965  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:44.726879  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:45.052223  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:45.694010  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:46.978731  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:49.540960  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:54.662420  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:11:04.904256  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:11:25.385691  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m3.720939183s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.72s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-klqdf" [50c91e9a-b337-4e6f-879b-8905c468f197] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1002 08:11:41.264323  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/functional-615837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-klqdf" [50c91e9a-b337-4e6f-879b-8905c468f197] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004194493s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-810803 "pgrep -a kubelet"
I1002 08:11:47.432381  294357 config.go:182] Loaded profile config "calico-810803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-810803 replace --force -f testdata/netcat-deployment.yaml
I1002 08:11:47.717451  294357 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tfptp" [900c9611-0326-4bca-bb5e-5582acf59fad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tfptp" [900c9611-0326-4bca-bb5e-5582acf59fad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003854503s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.29s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-810803 "pgrep -a kubelet"
I1002 08:11:54.081701  294357 config.go:182] Loaded profile config "bridge-810803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-810803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m8cz6" [e7e321eb-2482-4594-904f-8c58f9d89839] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m8cz6" [e7e321eb-2482-4594-904f-8c58f9d89839] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003800425s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)
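
The NetCatPod step (re)creates the shared netcat test workload, which the log shows as a "netcat" Deployment labelled app=netcat with a dnsutils container listening on port 8080, then polls until the pod is Running. A rough manual equivalent (the rollout command is an assumed substitute for the harness's pod poller):

  # apply the test deployment used by the suite and wait for it to become available
  kubectl --context bridge-810803 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context bridge-810803 rollout status deployment/netcat --timeout=15m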

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-810803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/bridge/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-810803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/Start (59.95s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (59.947221966s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.95s)
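
The Start subtests differ only in how the CNI is selected: built-in plugins are chosen by name with --cni, while custom-flannel points --cni at a manifest on disk. Both invocations below are copied from the runs above:

  # built-in plugin selected by name
  out/minikube-linux-arm64 start -p calico-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker --container-runtime=crio
  # custom plugin deployed from a local manifest
  out/minikube-linux-arm64 start -p custom-flannel-810803 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio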

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-810803 "pgrep -a kubelet"
I1002 08:13:22.873399  294357 config.go:182] Loaded profile config "custom-flannel-810803": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-810803 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-986tl" [532f5ba8-1bfe-4904-a980-f0bf78ede724] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 08:13:25.847284  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/flannel-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:25.856443  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/flannel-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:25.867802  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/flannel-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:25.889144  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/flannel-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:25.930477  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/flannel-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:26.011909  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/flannel-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:26.173413  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/flannel-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:26.495305  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/flannel-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:27.137392  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/flannel-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-986tl" [532f5ba8-1bfe-4904-a980-f0bf78ede724] Running
E1002 08:13:28.269787  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/default-k8s-diff-port-417078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:28.419564  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/flannel-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:30.690849  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/auto-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:30.981496  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/flannel-810803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003366871s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-810803 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-810803 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.68s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-396070 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-396070" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-396070
--- SKIP: TestDownloadOnlyKic (0.68s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-466206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-466206
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.51s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
E1002 07:49:28.907946  294357 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-292504/.minikube/profiles/addons-067378/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:636: 
----------------------- debugLogs start: kubenet-810803 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-810803

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-810803

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-810803

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-810803

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-810803

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-810803

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-810803

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-810803

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-810803

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-810803

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: /etc/hosts:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: /etc/resolv.conf:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-810803

>>> host: crictl pods:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: crictl containers:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> k8s: describe netcat deployment:
error: context "kubenet-810803" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-810803" does not exist

>>> k8s: netcat logs:
error: context "kubenet-810803" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-810803" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-810803" does not exist

>>> k8s: coredns logs:
error: context "kubenet-810803" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-810803" does not exist

>>> k8s: api server logs:
error: context "kubenet-810803" does not exist

>>> host: /etc/cni:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: ip a s:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: ip r s:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: iptables-save:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: iptables table nat:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-810803" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-810803" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-810803" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: kubelet daemon config:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> k8s: kubelet logs:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-810803

>>> host: docker daemon status:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: docker daemon config:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: docker system info:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: cri-docker daemon status:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: cri-docker daemon config:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: cri-dockerd version:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: containerd daemon status:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: containerd daemon config:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: containerd config dump:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: crio daemon status:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: crio daemon config:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: /etc/crio:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

>>> host: crio config:
* Profile "kubenet-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-810803"

                                                
                                                
----------------------- debugLogs end: kubenet-810803 [took: 3.341549597s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-810803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-810803
--- SKIP: TestNetworkPlugins/group/kubenet (3.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-810803 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-810803" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-810803

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-810803" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810803"

                                                
                                                
----------------------- debugLogs end: cilium-810803 [took: 3.707242244s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-810803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-810803
--- SKIP: TestNetworkPlugins/group/cilium (3.86s)

                                                
                                    